Multimodal information fusion for urban scene understanding - Archive ouverte HAL
Journal article, Machine Vision and Applications, Year: 2016

Multimodal information fusion for urban scene understanding

Abstract

This paper addresses the problem of scene understanding for driver assistance systems. To recognize the large number of objects that may be found on the road, several sensors and decision algorithms have to be used. The proposed approach is based on the representation of all available information in over-segmented image regions. The main novelty of the framework is its capability to incorporate new classes of objects and to include new sensors or detection methods while remaining robust to sensor failures. Several classes such as ground, vegetation or sky are considered, as well as three different sensors. The approach was evaluated on real publicly available urban driving scene data.
No file deposited

Dates and versions

hal-01133430, version 1 (2015-03-19)

Identifiers

Cite

Philippe Xu, Franck Davoine, Jean-Baptiste Bordes, Huijing Zhao, Thierry Denœux. Multimodal information fusion for urban scene understanding. Machine Vision and Applications, 2016, 27 (3), pp.331-349. ⟨10.1007/s00138-014-0649-7⟩. ⟨hal-01133430⟩