
Indoor Semantic Segmentation using depth information

Abstract: This work addresses multi-class segmentation of indoor scenes from RGB-D inputs. While this area of research has gained much attention recently, most existing works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art results on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences, which could be processed in real time using appropriate hardware such as an FPGA.
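The core idea in the abstract — treating depth as an extra input channel alongside RGB and applying the same convolutional filters at several scales — can be sketched as follows. This is a toy NumPy illustration with a single fixed averaging filter; the paper's actual network learns its filter banks and uses many feature maps per scale, so all function names and sizes here are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D valid convolution of one channel with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def downsample(img):
    """Halve the resolution with 2x2 average pooling."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_features(rgbd, kernel, n_scales=3):
    """Apply the same filter at several scales of a 4-channel RGB-D input,
    mimicking (very loosely) a multiscale convolutional feature extractor."""
    feats = []
    for _ in range(n_scales):
        # Sum filter responses over the 4 input channels (R, G, B, depth).
        resp = sum(conv2d_valid(rgbd[c], kernel) for c in range(rgbd.shape[0]))
        feats.append(resp)
        # Build the next, coarser scale of the input pyramid.
        rgbd = np.stack([downsample(rgbd[c]) for c in range(rgbd.shape[0])])
    return feats

rgb = np.random.rand(3, 32, 32)
depth = np.random.rand(1, 32, 32)       # depth map treated as a fourth channel
rgbd = np.concatenate([rgb, depth])     # shape (4, 32, 32)
kernel = np.ones((3, 3)) / 9.0          # placeholder filter; learned in the real network
feats = multiscale_features(rgbd, kernel)
print([f.shape for f in feats])         # [(30, 30), (14, 14), (6, 6)]
```

Each scale sees the same filter but a coarser view of the scene, so the concatenated responses capture both fine detail and wider context — the property the multiscale network exploits for per-pixel labeling.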
Document type: Conference papers
Contributor: Laurent Najman
Submitted on: Wednesday, March 27, 2013 - 9:33:18 AM
Last modification on: Wednesday, February 26, 2020 - 7:06:05 PM



  • HAL Id: hal-00805105, version 1
  • arXiv: 1301.3572


Camille Couprie, Clément Farabet, Laurent Najman, Yann LeCun. Indoor Semantic Segmentation using depth information. First International Conference on Learning Representations (ICLR 2013), May 2013, Scottsdale, AZ, United States. pp.1-8. ⟨hal-00805105⟩


