Speech, Gaze and Head Motion in a Face-to-Face Collaborative Task - Archive ouverte HAL
Book Chapter, Year: 2010

Speech, Gaze and Head Motion in a Face-to-Face Collaborative Task

Abstract

In the present work we observe two subjects interacting in a collaborative task in a shared environment. One goal of the experiment is to measure the change in gaze behavior when one interactant wears dark glasses, so that his or her gaze is not visible to the other. The results show that if one subject wears dark glasses while telling the other subject the position of a certain object, the other subject needs significantly more time to locate and move that object. Hence the eye gaze of one subject looking at an object, when visible, speeds up the other subject's localization of that object. The second goal of the currently ongoing work is to collect data on the multimodal behavior of one of the subjects by means of audio recording, eye gaze tracking, and head motion tracking, in order to build a model that can be used to control a robot in a comparable scenario in future experiments.
No file deposited

Dates and versions

hal-00531003, version 1 (31-10-2010)

Identifiers

  • HAL Id: hal-00531003, version 1

Cite

Sascha Fagel, Gérard Bailly. Speech, Gaze and Head Motion in a Face-to-Face Collaborative Task. In: Anna Esposito, Antonietta M. Esposito, Raffaele Martone, Vincent C. Müller, Gaetano Scarpetta (eds.), Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues, Lecture Notes in Computer Science (LNCS) vol. 6456, Springer-Verlag, pp. 265-274, 2010. ISBN 978-3-642-18183-2. ⟨hal-00531003⟩