Mouth gesture and voice command based robot command interface - Archive ouverte HAL
Conference paper, 2009

Mouth gesture and voice command based robot command interface

Alexander Ceballos
Tanneguy Redarce

Abstract

In this paper we present a voice command and mouth gesture based robot command interface capable of controlling three degrees of freedom. The gesture set was designed to avoid head rotation and translation, relying solely on mouth movements. Mouth segmentation is performed using the normalized a* component, as in [1]. Gesture detection is carried out by a Gaussian Mixture Model (GMM) based classifier. A state machine then stabilizes the system response by restricting the number of possible movements depending on the current state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the da Vinci assisted-surgery command console.
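The abstract cites [1] for the normalized a* segmentation step but gives no implementation details. As an illustrative sketch only (not the authors' method), a standard sRGB → CIELAB conversion followed by thresholding the normalized a* channel might look like this; the threshold value and the toy image below are assumptions for demonstration:

```python
import numpy as np

def rgb_to_a_star(rgb):
    """Convert an RGB image (floats in [0, 1]) to the CIELAB a* channel.

    Standard sRGB -> XYZ (D65) -> L*a*b* pipeline; only a* is returned,
    since lips stand out on the red-green axis.
    """
    # linearize sRGB gamma
    c = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # sRGB -> XYZ (D65) matrix
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ M.T
    # normalize by the D65 reference white
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # CIELAB nonlinearity
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    # a* = 500 * (f(X/Xn) - f(Y/Yn))
    return 500.0 * (f[..., 0] - f[..., 1])

def segment_lips(rgb, thresh=0.6):
    """Normalize a* to [0, 1] over the image and threshold: lip pixels
    are strongly red, so they fall in the upper range of normalized a*."""
    a = rgb_to_a_star(rgb)
    a_norm = (a - a.min()) / (a.max() - a.min() + 1e-9)
    return a_norm > thresh

# Toy example: a reddish "lip" patch against skin-toned background pixels.
img = np.full((4, 4, 3), [0.8, 0.6, 0.5])   # skin-like
img[1:3, 1:3] = [0.7, 0.1, 0.2]             # lip-like red patch
mask = segment_lips(img)
```

Per-image normalization of a* (rather than a fixed absolute threshold) makes the mask less sensitive to illumination changes, which is the usual motivation for the "normalized" variant of this channel.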
No file deposited

Dates and versions

hal-00402457, version 1 (07-07-2009)

Identifiers

Cite

Juan B. Gómez, Alexander Ceballos, Flavio Prieto, Tanneguy Redarce. Mouth gesture and voice command based robot command interface. ICRA, May 2009, Kobe, Japan. pp.333 - 338, ⟨10.1109/ROBOT.2009.5152858⟩. ⟨hal-00402457⟩
183 views
0 downloads
