RGBD object recognition and visual texture classification for indoor semantic mapping

Abstract: We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and build a semantic map containing high-level information similar to that extracted by humans. This information includes the rooms, their connectivity, the objects they contain, and the material of the walls and ground. The robot was developed to participate in CAROTTE, a French exploration and mapping contest whose goal is to produce easily interpretable maps of an unknown environment. In particular, we present our object detection approach, based on a color+depth camera, which fuses 3D, color, and texture information through a neural network for robust object recognition. We also present a material recognition approach based on machine learning applied to vision. We demonstrate the performance of these modules on image databases and provide examples of the full system working in real environments.
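The abstract describes fusing 3D, color, and texture cues from an RGB-D camera into a single representation for object recognition. The sketch below illustrates one common way such a fused feature vector can be built; the specific descriptors (per-channel color histograms, depth statistics, and a gradient-magnitude histogram as a texture proxy) are assumptions for illustration, not the descriptors or network used in the paper.

```python
import numpy as np

def extract_fused_features(rgb, depth, bins=8):
    """Concatenate color, depth, and texture descriptors into one vector.

    Illustrative sketch only: the descriptors below are generic stand-ins
    for the 3D/color/texture cues fused by the paper's neural network.
    """
    # Per-channel color histograms over [0, 256), normalized to sum to 1.
    color = np.concatenate([
        np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    color /= color.sum()

    # Simple 3D cue: mean and std of valid (positive) depth readings.
    valid = depth[depth > 0]
    depth_feat = (np.array([valid.mean(), valid.std()])
                  if valid.size else np.zeros(2))

    # Texture proxy: histogram of grayscale gradient magnitudes.
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    texture = np.histogram(mag, bins=bins)[0].astype(float)
    texture /= max(texture.sum(), 1.0)

    # Fused vector: 3*bins color + 2 depth + bins texture dimensions.
    return np.concatenate([color, depth_feat, texture])

# Synthetic 32x32 RGB-D patch for demonstration.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(32, 32, 3))
depth = rng.uniform(0.5, 3.0, size=(32, 32))
feat = extract_fused_features(rgb, depth)
print(feat.shape)  # (34,) with bins=8: 24 color + 2 depth + 8 texture
```

In a full pipeline, such a fused vector would be fed to a classifier (in the paper, a neural network) trained on labeled object views.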
Document type: Conference papers

Cited literature: [24 references]

https://hal.archives-ouvertes.fr/hal-00755295
Contributor: David Filliat
Submitted on: Wednesday, November 21, 2012 - 10:18:00 PM
Last modified on: Thursday, March 21, 2019 - 12:23:36 PM
Document(s) archived on: Saturday, December 17, 2016 - 1:32:30 PM

File

Filliat_Tepra2012.pdf (files produced by the author(s))

Citation

David Filliat, Emmanuel Battesti, Stéphane Bazeille, Guillaume Duceux, Alexander Gepperth, et al.. RGBD object recognition and visual texture classification for indoor semantic mapping. Technologies for Practical Robot Applications (TePRA), 2012 IEEE International Conference on, Apr 2012, United States. pp.127 - 132, ⟨10.1109/TePRA.2012.6215666⟩. ⟨hal-00755295⟩
