Real-time 3D motion capture by monocular vision and virtual rendering

Abstract: Networked 3D virtual environments allow multiple users to interact over the Internet through avatars and to experience a sense of virtual telepresence. However, avatar control can be tedious. Motion capture systems based on 3D sensors have reached the consumer market, but webcams remain more widespread and cheaper. This work aims at animating a user's avatar through real-time motion capture using a personal computer and a plain webcam. Following a classical model-based approach, we register a 3D articulated upper-body model onto video sequences and propose a number of heuristics that accelerate particle filtering while robustly tracking user motion. Describing the body pose by the 3D positions of the wrists, rather than by joint angles, allows depth ambiguities to be handled efficiently during probabilistic tracking. We demonstrate experimentally the robustness of our real-time monocular 3D body tracking, even under partial occlusions and motion in the depth direction.
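The abstract describes probabilistic tracking by particle filtering over wrist 3D positions. As a rough illustration of the underlying idea only (not the authors' implementation, whose likelihood model and heuristics are described in the paper), a minimal bootstrap particle filter tracking a single 3D point from noisy observations might look like this; all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_track(observations, n_particles=500,
                          motion_noise=0.05, obs_noise=0.1):
    """Track a 3D point (e.g. a wrist position) from noisy
    observations with a basic bootstrap particle filter."""
    # Initialize particles around the first observation.
    particles = observations[0] + rng.normal(
        0.0, obs_noise, size=(n_particles, 3))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        # Predict: diffuse particles with Gaussian motion noise.
        particles = particles + rng.normal(
            0.0, motion_noise, size=particles.shape)
        # Update: weight each particle by the observation likelihood.
        sq_err = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * sq_err / obs_noise ** 2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        estimates.append(weights @ particles)
        # Resample (systematic) to avoid weight degeneracy.
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Toy trajectory: a point moving along x with motion in depth (z).
t = np.linspace(0.0, 1.0, 50)
truth = np.stack([t, 0.2 * np.ones_like(t), np.sin(2 * np.pi * t)], axis=1)
obs = truth + rng.normal(0.0, 0.1, size=truth.shape)
est = particle_filter_track(obs)
print(est.shape)  # (50, 3)
```

In the paper's setting, the observation likelihood comes from registering the articulated body model against the image rather than from a known 3D measurement, and the state covers both wrists, but the predict/weight/resample loop is the same.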
Document type: Journal articles
Contributor: Médiathèque Télécom Sudparis & Institut Mines-Télécom Business School
Submitted on: Tuesday, November 7, 2017 - 10:16:08 AM
Last modification on: Thursday, October 17, 2019 - 12:35:09 PM




David Antonio Gómez Jáuregui, Patrick Horain. Real-time 3D motion capture by monocular vision and virtual rendering. Machine Vision and Applications, Springer Verlag, 2017, 28 (8), pp.839 - 858. ⟨10.1007/s00138-017-0861-3⟩. ⟨hal-01630015⟩


