Perceiving and rendering users in a 3D interaction

Abstract: In computer-supported remote collaboration, communication between users can be enhanced with a visual channel. Plain videos of individual users unfortunately fail to render their joint actions on the objects they share, which limits their mutual perception. Remote interaction can be enhanced by immersing user representations (avatars) and the shared objects in a networked 3D virtual environment, so that user actions are rendered by avatar mimicry. Communication gestures (as opposed to actions) are captured by real-time computer vision and rendered. We have developed a single-webcam system for 3D motion capture of the body and face. We used a library of communication gestures to learn statistical gesture models and applied them as prior constraints for monocular motion capture, which improves the tracking of ambiguous poses and renders some motion details. We have also developed an open-source library for real-time image analysis and computer vision that supports acceleration on consumer graphics processing units (GPUs). Finally, users are rendered with low-bandwidth avatar animation, thus opening the path to low-cost remote virtual presence at home.
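The gesture-prior idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation; all names, dimensions, and data here are illustrative assumptions): a low-dimensional PCA subspace learned from example poses acts as a statistical gesture model, and a candidate pose is scored by its distance to that subspace, so ambiguous monocular estimates can be biased toward plausible gestures.

```python
import numpy as np

# Hypothetical sketch of a statistical gesture prior for monocular tracking.
# A PCA subspace is learned from example poses; candidate poses far from the
# subspace receive a high prior cost and can be penalized during tracking.

rng = np.random.default_rng(0)

# Stand-in training data: N example poses, each a D-dimensional
# joint-angle vector (a real system would use captured gesture data).
poses = rng.normal(size=(200, 12))

# Learn a low-dimensional linear subspace (PCA) of plausible gestures.
mean = poses.mean(axis=0)
centered = poses - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:4]  # keep the first 4 principal components

def prior_cost(pose):
    """Distance from a candidate pose to the learned gesture subspace.
    A low cost means the pose resembles the training gestures."""
    d = pose - mean
    projected = basis.T @ (basis @ d)  # projection onto the subspace
    return float(np.linalg.norm(d - projected))

# A pose drawn from the training distribution scores lower than one
# perturbed far away from it.
plausible = poses[0]
implausible = mean + rng.normal(scale=10.0, size=12)
print(prior_cost(plausible) < prior_cost(implausible))
```

In a tracker, this cost would typically be added (with a weight) to the image-matching cost, so that among several poses that explain the image equally well, the one closest to the learned gesture space is preferred.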

Cited literature: 18 references
Contributor: Médiathèque Télécom Sudparis & Institut Mines-Télécom Business School
Submitted on: Friday, June 21, 2013 - 11:09:30 AM
Last modification on: Monday, June 17, 2019 - 5:04:06 PM
Long-term archiving on: Sunday, September 22, 2013 - 4:07:28 AM


  • HAL Id: hal-00836606, version 1


Patrick Horain, José Marques Soares, Dianle Zhou, Zhenbo Li, David Antonio Gomez Jauregui, et al.. Perceiving and rendering users in a 3D interaction. IHCI 2010 : Second IEEE International Conference on Intelligent Human Computer Interaction, Jan 2010, Allahabad, India. pp.42-53. ⟨hal-00836606⟩

