Indoor Semantic Segmentation using depth information

Abstract: This work addresses multi-class segmentation of indoor scenes from RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real-time using appropriate hardware such as an FPGA.
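To make the multiscale idea concrete, here is a minimal illustrative sketch (not the authors' network): depth is stacked as a fourth input channel alongside RGB, an image pyramid is built by downsampling, and the same small filter bank is applied at every scale. All function names, sizes, and the crude striding-based downsampler are assumptions for illustration only.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 2-D valid convolution (illustration only)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def multiscale_features(rgbd, kernels, n_scales=3):
    """Apply the same filter bank at each level of an image pyramid.

    rgbd: (H, W, 4) array -- RGB plus depth as a 4th channel.
    kernels: list of 2-D filters shared across all scales.
    """
    feats = []
    img = rgbd
    for _ in range(n_scales):
        for k in kernels:
            # Sum filter responses over the 4 input channels,
            # then apply a ReLU-style nonlinearity.
            resp = sum(conv2d_valid(img[..., c], k)
                       for c in range(img.shape[-1]))
            feats.append(np.maximum(resp, 0))
        img = img[::2, ::2, :]  # crude downsampling to the next scale
    return feats

rgbd = np.random.rand(32, 32, 4)              # toy RGB-D input
kernels = [np.random.randn(3, 3) for _ in range(2)]
feats = multiscale_features(rgbd, kernels)
print(len(feats))  # 2 filters x 3 scales = 6 feature maps
```

In the actual paper the filters are learned by backpropagation and the per-scale feature maps are upsampled and concatenated before classification; the sketch only shows the shared-filters-across-scales structure.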
Document type:
Conference paper
First International Conference on Learning Representations (ICLR 2013), May 2013, Scottsdale, AZ, United States. pp.1-8, 2013

https://hal.archives-ouvertes.fr/hal-00805105
Contributor: Laurent Najman
Submitted on: Wednesday, March 27, 2013 - 09:33:18
Last modified on: Wednesday, July 27, 2016 - 14:48:48

Identifiers

  • HAL Id : hal-00805105, version 1
  • arXiv: 1301.3572

Citation

Camille Couprie, Clément Farabet, Laurent Najman, Yann LeCun. Indoor Semantic Segmentation using depth information. First International Conference on Learning Representations (ICLR 2013), May 2013, Scottsdale, AZ, United States. pp.1-8, 2013. 〈hal-00805105〉

