Multimodal human machine interactions in virtual and augmented reality

Abstract: Virtual worlds are developing rapidly over the Internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a physical person. Each person controls one or several avatars and usually receives feedback from the virtual world on an audio-visual display. Ideally, all senses should be engaged to feel fully embedded in a virtual world; in practice, sound, vision and sometimes touch are the available modalities. This paper reviews the technological developments that enable audio-visual interactions in virtual and augmented reality worlds. Emphasis is placed on speech and gesture interfaces, including talking face analysis and synthesis.
Document type: Book section

https://hal.archives-ouvertes.fr/hal-00472794
Contributor: Médiathèque Télécom Sudparis & Institut Mines-Télécom Business School
Submitted on: Tuesday, April 13, 2010 - 11:32:37 AM
Last modified on: Monday, February 25, 2019 - 11:08:10 AM

Citation

Gérard Chollet, Anna Esposito, Annie Gentes, Patrick Horain, Walid Karam, et al. Multimodal human machine interactions in virtual and augmented reality. In: Multimodal Signals: Cognitive and Algorithmic Issues. COST Action 2102 and euCognition International School, Vietri sul Mare, Italy, April 21-26, 2008, Revised Selected and Invited Papers. Lecture Notes in Computer Science, vol. 5398, Springer-Verlag, 2009, pp. 1-23. ISBN 978-3-642-00524-4. ⟨10.1007/978-3-642-00525-1_1⟩. ⟨hal-00472794⟩