
Merging Live and pre-Captured Data to support Full 3D Head Reconstruction for Telepresence

Abstract: This paper proposes a 3D head reconstruction method for low-cost 3D telepresence systems that uses only a single consumer-level hybrid (color + depth) sensor located in front of the user. Our method fuses the real-time, noisy, and incomplete output of the hybrid sensor with a set of static, high-resolution textured models acquired in a calibration phase. A complete, fully textured 3D model of the user's head can thus be reconstructed in real time while accurately preserving the user's facial expression. The main features of our method are a mesh interpolation and a fusion of static and dynamic textures, which combine the higher resolution of the pre-captured models with the dynamic features of the live face.
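The abstract describes a per-vertex fusion of a noisy live mesh with a pre-captured high-resolution template. A minimal sketch of that blending idea, assuming a shared mesh topology and a hypothetical per-vertex confidence weight (names and weighting are illustrative, not the authors' implementation):

```python
# Hypothetical sketch: blend a noisy, incomplete live mesh with a
# pre-captured high-resolution template, per vertex. A confidence of 1
# fully trusts the live depth sample; 0 falls back to the template
# (e.g. where the live depth map has holes).

def fuse_vertices(live, template, confidence):
    """live, template: lists of (x, y, z) vertices with the same topology.
    confidence: per-vertex weights in [0, 1]."""
    fused = []
    for (lx, ly, lz), (tx, ty, tz), w in zip(live, template, confidence):
        fused.append((w * lx + (1 - w) * tx,
                      w * ly + (1 - w) * ty,
                      w * lz + (1 - w) * tz))
    return fused

# Example: first vertex fully trusted, second missing from the live depth map.
live     = [(0.0, 0.0, 1.0), (0.0, 0.0, 0.0)]
template = [(0.1, 0.0, 1.1), (0.2, 0.1, 1.0)]
fused = fuse_vertices(live, template, [1.0, 0.0])
```

The same convex-combination idea extends to the texture fusion the abstract mentions, blending the static high-resolution texture with the live color stream per texel.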

Cited literature: 5 references

Contributor: Cédric Fleury
Submitted on: Tuesday, September 2, 2014 - 9:48:26 PM
Last modification on: Wednesday, April 27, 2022 - 5:22:07 PM
Long-term archiving on: Wednesday, December 3, 2014 - 11:01:54 AM


Files produced by the author(s)
Cédric Fleury, Tiberiu Popa, Tat Jen Cham, Henry Fuchs. Merging Live and pre-Captured Data to support Full 3D Head Reconstruction for Telepresence. EG'14, Apr 2014, Strasbourg, France. ⟨10.2312/egsh.20141002⟩. ⟨hal-01060128⟩


