Indoor Semantic Segmentation using depth information

Abstract: This work addresses multi-class segmentation of indoor scenes from RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset, with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
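The multiscale idea named in the abstract — applying the same learned filter bank to the RGB-D input at several scales so features capture context at different resolutions — can be sketched as follows. This is not the authors' implementation: the function names, the choice of scales, the filter bank, and the NumPy-only convolution are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive single-channel 2-D 'valid' correlation (illustrative, not optimized)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def downsample(img, factor):
    """Average-pool downsampling by an integer factor."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def multiscale_features(rgbd, kernels, scales=(1, 2, 4)):
    """Apply one shared filter bank to each channel of a 4-channel RGB-D
    image at several scales; return one rectified feature map per
    (scale, channel, kernel) combination."""
    maps = []
    for s in scales:
        for c in range(rgbd.shape[2]):
            plane = downsample(rgbd[:, :, c], s)
            for k in kernels:
                # Rectification stands in for the network's nonlinearity.
                maps.append(np.maximum(conv2d_valid(plane, k), 0))
    return maps
```

In the paper's actual network the filters are learned by backpropagation and followed by pooling and a classifier; the sketch only shows how depth enters as a fourth input channel processed alongside RGB at every scale.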
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-00805105
Contributor: Laurent Najman
Submitted on: Wednesday, March 27, 2013 - 9:33:18 AM
Last modification on: Tuesday, January 22, 2019 - 3:51:42 PM


Identifiers

  • HAL Id: hal-00805105, version 1
  • arXiv: 1301.3572

Citation

Camille Couprie, Clément Farabet, Laurent Najman, Yann LeCun. Indoor Semantic Segmentation using depth information. First International Conference on Learning Representations (ICLR 2013), May 2013, Scottsdale, AZ, United States. pp. 1-8. ⟨hal-00805105⟩
