
SnapNet-R: Consistent 3D Multi-View Semantic Labeling for Robotics

Abstract: In this paper we present a new approach to semantic recognition in the context of robotics. As a robot moves through its environment, it acquires 3D information either from its sensors or from its own motion via 3D reconstruction. Our approach (i) performs 3D-coherent synthesis of scene observations and (ii) mixes them in a multi-view framework for 3D labeling. (iii) This is efficient both locally (for 2D semantic segmentation) and globally (for 3D structure labeling). It makes it possible to add semantics to the observed scene beyond simple image classification, as shown on challenging datasets such as SUNRGBD and the 3DRMS Reconstruction Challenge.
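The multi-view idea in the abstract — combining per-view 2D semantic predictions into a consistent labeling of the 3D structure — can be illustrated with a generic label-fusion sketch. This is an assumption-laden simplification, not the actual SnapNet-R pipeline: the function name, the dictionary-based point-to-pixel visibility mapping, and the majority-vote fusion rule are all hypothetical choices made for clarity.

```python
import numpy as np

def fuse_multiview_labels(view_labels, point_pixels, num_classes):
    """Fuse per-view 2D semantic label maps into per-point 3D labels
    by majority vote (a generic sketch, not the paper's exact method).

    view_labels: list of (H, W) integer label maps, one per view.
    point_pixels: list (one dict per view) mapping 3D point index
        -> (row, col) pixel where that point is visible in the view.
    Returns a dict: point index -> fused class label.
    """
    votes = {}
    for labels, visible in zip(view_labels, point_pixels):
        for pid, (r, c) in visible.items():
            cls = int(labels[r, c])
            # one vote per view in which the point is visible
            counts = votes.setdefault(pid, np.zeros(num_classes, dtype=int))
            counts[cls] += 1
    # per-point argmax over the accumulated class votes
    return {pid: int(np.argmax(c)) for pid, c in votes.items()}

# Tiny usage example: two 2x2 views observing two 3D points.
view0 = np.array([[0, 1], [2, 1]])
view1 = np.array([[0, 0], [1, 1]])
vis0 = {0: (0, 0), 1: (0, 1)}  # where points 0 and 1 fall in view 0
vis1 = {0: (0, 0), 1: (1, 0)}  # where they fall in view 1
fused = fuse_multiview_labels([view0, view1], [vis0, vis1], num_classes=3)
# point 0 is labeled class 0 in both views; point 1 class 1 in both
```

Majority voting is only one possible fusion rule; averaging per-class scores before the argmax would weight confident views more strongly, which is closer in spirit to how multi-view segmentation networks typically aggregate predictions.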
Document type :
Conference papers
Complete list of metadata

Cited literature: 46 references
Contributor: David Filliat
Submitted on: Tuesday, June 5, 2018 - 5:20:45 PM
Last modification on: Friday, December 3, 2021 - 11:34:11 AM
Long-term archiving on: Thursday, September 6, 2018 - 7:05:22 PM
Files produced by the author(s)
Joris Guerry, Alexandre Boulch, Bertrand Le Saux, Julien Moras, Aurelien Plyer, et al. SnapNet-R: Consistent 3D Multi-View Semantic Labeling for Robotics. IEEE International Conference on Computer Vision Workshop (ICCVW), Oct 2017, Venice, Italy. ⟨10.1109/ICCVW.2017.85⟩. ⟨hal-01808539⟩