Viewpoint invariant 3D landmark model inference from monocular 2D images using higher-order priors

Abstract: In this paper, we propose a novel one-shot optimization approach that simultaneously determines both the optimal 3D landmark model and the corresponding 2D projections without explicit estimation of the camera viewpoint, while also handling misdetections and partial occlusions. To this end, a 3D shape manifold is built upon fourth-order interactions of landmarks from a training set, and pose-invariant statistics are obtained in this space. The 3D-2D consistency is also encoded in such higher-order interactions, which eliminates the need for viewpoint estimation. Furthermore, explicit modeling of visibility further improves the performance of the method by handling missing correspondences and occlusions. Inference is addressed through a MAP formulation that is naturally transformed into a higher-order MRF optimization problem and solved using a dual-decomposition-based method. Promising results on standard face benchmarks demonstrate the potential of our approach.
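The idea of pose-invariant statistics over fourth-order landmark interactions can be illustrated with a minimal sketch: the pairwise distances within a 4-tuple of 3D landmarks are unaffected by rotation and translation, and normalizing them removes scale. The function name `fourth_order_invariant` and the choice of normalized pairwise distances are illustrative assumptions, not the exact statistic used in the paper.

```python
import numpy as np

def fourth_order_invariant(p1, p2, p3, p4):
    """Similarity-invariant descriptor of a 4-tuple of 3D landmarks.

    Pairwise distances are unchanged by rotation and translation;
    dividing by their sum removes scale, so only the shape of the
    tetrahedron spanned by the four points remains.
    (Hypothetical statistic for illustration, not the paper's own.)
    """
    pts = np.stack([p1, p2, p3, p4])           # (4, 3) array of landmarks
    i, j = np.triu_indices(4, k=1)             # the 6 unordered landmark pairs
    d = np.linalg.norm(pts[i] - pts[j], axis=1)
    return d / d.sum()                         # scale-normalized distance profile

# Sanity check: the descriptor is unchanged under a random similarity transform.
rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
moved = 2.5 * pts @ q.T + np.array([1.0, -3.0, 0.5])
assert np.allclose(fourth_order_invariant(*pts),
                   fourth_order_invariant(*moved))
```

In the paper such statistics populate the higher-order clique potentials of the MRF; the sketch only shows why a measurement defined on four landmarks can be made independent of the camera pose.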
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-00856131
Contributor: Vivien Fécamp
Submitted on: Friday, August 30, 2013 - 2:57:08 PM
Last modification on: Tuesday, March 5, 2019 - 3:34:03 PM

Citation

Chaohui Wang, Yun Zeng, Loïc Simon, Ioannis Kakadiaris, Dimitris Samaras, et al. Viewpoint invariant 3D landmark model inference from monocular 2D images using higher-order priors. 13th International Conference on Computer Vision - ICCV 2011, Nov 2011, Barcelona, Spain. pp.319-326, ⟨10.1109/ICCV.2011.6126258⟩. ⟨hal-00856131⟩
