SnapNet-R: Consistent 3D Multi-View Semantic Labeling for Robotics

Abstract: In this paper we present a new approach to semantic recognition in the context of robotics. As a robot moves through its environment, it acquires 3D information either from its sensors or from its own motion via 3D reconstruction. Our approach (i) produces 3D-coherent synthetic views of scene observations and (ii) fuses them in a multi-view framework for 3D labeling. (iii) This is efficient both locally (for 2D semantic segmentation) and globally (for 3D structure labeling). It adds semantics to the observed scene that go beyond simple image classification, as shown on challenging datasets such as SUNRGBD and the 3DRMS Reconstruction Challenge.
Document type: Conference paper

Cited literature: 46 references

https://hal.archives-ouvertes.fr/hal-01808539
Contributor: David Filliat
Submitted on: Tuesday, June 5, 2018 - 5:20:45 PM
Last modification on: Wednesday, July 3, 2019 - 10:48:05 AM
Long-term archiving on: Thursday, September 6, 2018 - 7:05:22 PM

File

2017-iccv-slash.pdf
Files produced by the author(s)

Citation

Joris Guerry, Alexandre Boulch, Bertrand Le Saux, Julien Moras, Aurelien Plyer, et al. SnapNet-R: Consistent 3D Multi-View Semantic Labeling for Robotics. IEEE International Conference on Computer Vision Workshop (ICCVW), Oct 2017, Venice, Italy. ⟨10.1109/ICCVW.2017.85⟩. ⟨hal-01808539⟩
