Conference paper, Year: 2016

Deep exemplar 2d-3d detection by adapting from real to rendered views

Abstract

This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits of recent advances in deep learning to 2D-3D exemplar detection. We apply our method to two tasks: instance detection, where we evaluate on the IKEA dataset [36], and object category detection, where we outperform Aubry et al. [3] for "chair" detection on a subset of the Pascal VOC dataset.
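As a rough illustration of the adaptation idea described in the abstract (this is not the authors' code; the backbone choice, adapter architecture, and the use of an L2 alignment loss are all assumptions made for the sketch), one could train a small adapter that maps CNN features of rendered views composited onto natural backgrounds toward features of the corresponding clean rendered views:

# Minimal sketch of feature adaptation between real-like (composited) and
# rendered views. All module and variable names are hypothetical.
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen backbone that produces features for both kinds of input.
backbone = models.alexnet(weights=None).features.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Hypothetical adaptation head: maps features of natural-looking images
# into the rendered-view feature space (placeholder 1x1 conv layers).
adapter = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=1),
)

optimizer = torch.optim.SGD(adapter.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.MSELoss()

def training_step(composite_batch, rendered_batch):
    """composite_batch: rendered objects composited onto natural images;
    rendered_batch: the same objects rendered on a plain background.
    Both are tensors of shape (N, 3, 224, 224)."""
    with torch.no_grad():
        feat_composite = backbone(composite_batch)  # "real-like" features
        feat_rendered = backbone(rendered_batch)    # target rendered features
    adapted = adapter(feat_composite)
    loss = criterion(adapted, feat_rendered)        # align the two domains
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At detection time, features of candidate image regions would be passed through such an adapter and matched against precomputed features of rendered exemplar views, which is how the adaptation plugs into a standard CNN detection pipeline.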
Main file: Massa_Deep_Exemplar_2D-3D_CVPR_2016_paper.pdf (1.71 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01800639 , version 1 (27-05-2018)

Identifiers

Cite

Francisco Massa, Bryan Russell, Mathieu Aubry. Deep exemplar 2d-3d detection by adapting from real to rendered views. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016, Las Vegas, United States. ⟨10.1109/CVPR.2016.648⟩. ⟨hal-01800639⟩
252 views
163 downloads
