
A Fast Audiovisual Attention Model for Human Detection and Localization on a Companion Robot

Abstract : This paper describes a fast audiovisual attention model applied to human detection and localization on a companion robot. Its originality lies in combining static and dynamic modalities over two analysis paths to guide the robot's gaze towards the most probable human locations, based on the concept of saliency. Visual, depth, and audio data are acquired using an RGB-D camera and two horizontal microphones. Adapted state-of-the-art methods extract the relevant information, which is then fused via two-dimensional Gaussian representations. In the resulting saliency map, human positions appear as the most salient areas. Experiments show that the proposed model achieves a mean F-measure of 66% with a mean precision of 77% for human localization using bounding-box areas on 10 manually annotated videos. The corresponding algorithm processes 70 frames per second on the robot.
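The paper does not give the fusion formulas in the abstract, but the idea of fusing per-modality detections as two-dimensional Gaussian blobs into a single saliency map can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the cue positions, spreads (`sigma`), and weights below are hypothetical placeholders for outputs of the visual, depth, and audio analysis paths.

```python
import numpy as np

def gaussian_2d(shape, center, sigma):
    """Render a 2D isotropic Gaussian blob centered at (x, y)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def fuse_saliency(shape, cues):
    """Sum weighted Gaussian blobs, one per modality cue, into one map."""
    saliency = np.zeros(shape)
    for center, sigma, weight in cues:
        saliency += weight * gaussian_2d(shape, center, sigma)
    # Normalize to [0, 1] so maps are comparable across frames.
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

# Hypothetical cues: a person detection (visual), a close region (depth),
# and a sound-source bearing mapped to an image column (audio).
cues = [((120, 80), 15.0, 1.0),   # visual: tight, high confidence
        ((118, 85), 25.0, 0.5),   # depth: broader support region
        ((125, 90), 40.0, 0.5)]   # audio: coarse horizontal localization
smap = fuse_saliency((240, 320), cues)
peak = np.unravel_index(np.argmax(smap), smap.shape)  # (y, x) to gaze at
```

Because the three blobs overlap, the most salient point lies near the agreeing cue centers, which is the location the robot's gaze would be directed to.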
Document type :
Conference papers
Contributor : Denis Pellerin
Submitted on : Monday, December 5, 2016
  • HAL Id : hal-01408740, version 1


Rémi Ratajczak, Denis Pellerin, Quentin Labourey, Catherine Garbay. A Fast Audiovisual Attention Model for Human Detection and Localization on a Companion Robot. VISUAL 2016 - The First International Conference on Applications and Systems of Visual Paradigms, IARIA, Nov 2016, Barcelona, Spain. ⟨hal-01408740⟩