Multimodal human machine interactions in virtual and augmented reality

Abstract: Virtual worlds are developing rapidly over the Internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a physical person. Each person controls one or more avatars and usually receives feedback from the virtual world on an audio-visual display. Ideally, all senses should be engaged to feel fully embedded in a virtual world; in practice, sound, vision and sometimes touch are the available modalities. This paper reviews the technological developments that enable audio-visual interactions in virtual and augmented reality worlds. Emphasis is placed on speech and gesture interfaces, including talking-face analysis and synthesis.
Document type: Book section
Contributor: Médiathèque Télécom Sudparis & Institut Mines-Télécom Business School
Submitted on: Tuesday, April 13, 2010 - 11:32:37 AM
Last modification on: Wednesday, October 14, 2020 - 12:44:49 PM

Gérard Chollet, Anna Esposito, Annie Gentes, Patrick Horain, Walid Karam, et al. Multimodal human machine interactions in virtual and augmented reality. In: Multimodal Signals: Cognitive and Algorithmic Issues. COST Action 2102 and euCognition International School, Vietri sul Mare, Italy, April 21-26, 2008, Revised Selected and Invited Papers. Lecture Notes in Computer Science, vol. 5398, Springer-Verlag, 2009, pp. 1-23. ISBN 978-3-642-00524-4. ⟨10.1007/978-3-642-00525-1_1⟩. ⟨hal-00472794⟩