"Look At This One" Detection sharing between modality-independent classifiers for robotic discovery of people

Abstract: With the advent of low-cost RGB-D sensors, many solutions have been proposed for extracting and fusing colour and depth information. In this paper, we propose new fusion approaches for these multimodal sources applied to people detection. We are especially concerned with a scenario in which a robot operates in a changing environment. We (i) extend the Faster R-CNN framework proposed by Girshick et al. [1] to this use case, (ii) significantly improve people-detection performance on the InOutDoor RGBD People dataset [2] and the RGBD People dataset [3], and (iii) show that these fusion strategies efficiently handle sensor defects such as the complete loss of a modality. Furthermore, (iv) we propose a new dataset for people detection in difficult conditions: ONERA.ROOM.
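The abstract describes sharing detections between modality-independent classifiers so that people are still found when one modality (colour or depth) fails. The sketch below is purely illustrative and is not the authors' method: it shows one plain late-fusion scheme, in which detections from a hypothetical RGB detector and a hypothetical depth detector are pooled and deduplicated with greedy non-maximum suppression, so an empty output from a failed sensor simply leaves the other detector's results intact.

```python
# Illustrative sketch only (not the paper's implementation): late fusion of
# person detections from two modality-independent detectors.
# Each detection is a tuple (x1, y1, x2, y2, score).

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(rgb_dets, depth_dets, iou_thr=0.5):
    """Pool detections from both modalities, then apply greedy
    non-maximum suppression across the pooled, score-sorted set.
    If one modality returns nothing (sensor lost), the other
    modality's detections pass through unchanged."""
    pooled = sorted(rgb_dets + depth_dets, key=lambda d: d[4], reverse=True)
    kept = []
    for det in pooled:
        if all(iou(det[:4], k[:4]) < iou_thr for k in kept):
            kept.append(det)
    return kept
```

For example, `fuse_detections([(0, 0, 10, 10, 0.9)], [(1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)])` keeps the higher-scored RGB box, suppresses the overlapping depth box, and keeps the distinct depth-only person; with an empty RGB list, the depth detections are returned as-is.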
Document type :
Conference papers

Cited literature [17 references]

https://hal.archives-ouvertes.fr/hal-01628762
Contributor: David Filliat
Submitted on: Saturday, November 4, 2017 - 12:59:46 PM
Last modification on: Tuesday, March 26, 2019 - 2:24:45 PM
Document(s) archived on: Monday, February 5, 2018 - 12:12:24 PM

File

2017_ECMR_LookAtThisOne.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01628762, version 1

Citation

Joris Guerry, Bertrand Le Saux, David Filliat. "Look At This One" Detection sharing between modality-independent classifiers for robotic discovery of people. ECMR 2017 - European Conference on Mobile Robotics, Sep 2017, Paris, France. pp.1-6. ⟨hal-01628762⟩
