Exploring to learn visual saliency: The RL-IAC approach

Abstract: The problem of object localization and recognition on autonomous mobile robots is still an active topic. In this context, we tackle the problem of learning a model of visual saliency directly on a robot. This model, learned and improved on-the-fly during the robot's exploration, provides an efficient tool for localizing relevant objects within the environment. The proposed approach comprises two intertwined components. On the one hand, we describe a method for learning and incrementally updating a model of visual saliency from a depth-based object detector. This saliency model can also be exploited to produce bounding-box proposals around objects of interest. On the other hand, we investigate an autonomous exploration technique to efficiently learn such a saliency model. The proposed exploration strategy, called Reinforcement Learning-Intelligent Adaptive Curiosity (RL-IAC), drives the robot's exploration so that the samples the robot selects are likely to improve the current saliency model. We then demonstrate that such a saliency model learned directly on a robot outperforms several state-of-the-art saliency techniques, and that RL-IAC drastically decreases the time required to learn a reliable saliency model.
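The abstract describes an exploration strategy that steers the robot toward samples likely to improve the current saliency model. A minimal Python sketch of that curiosity-driven selection idea is given below; all names, parameters, and the specific learning-progress formula are illustrative assumptions, not the paper's actual implementation:

```python
import random

class CuriosityRegionSelector:
    """Illustrative sketch: pick the exploration region whose recent
    prediction error has dropped the most (highest "learning progress"),
    with occasional random picks to keep exploring."""

    def __init__(self, n_regions, window=5, epsilon=0.1, seed=0):
        self.errors = [[] for _ in range(n_regions)]  # error history per region
        self.window = window      # half-width of the sliding comparison window
        self.epsilon = epsilon    # probability of a uniformly random pick
        self.rng = random.Random(seed)

    def record_error(self, region, error):
        # Log the saliency model's prediction error on a sample from this region.
        self.errors[region].append(error)

    def progress(self, region):
        # Learning progress = drop in mean error between the older and the
        # newer half of the last 2*window samples.
        hist = self.errors[region][-2 * self.window:]
        if len(hist) < 2 * self.window:
            return float("inf")  # under-sampled regions stay attractive
        old = sum(hist[:self.window]) / self.window
        new = sum(hist[self.window:]) / self.window
        return old - new

    def next_region(self):
        # Epsilon-greedy choice of the region with the highest progress.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.errors))
        return max(range(len(self.errors)), key=self.progress)
```

With this scheme, a region where errors are still shrinking (the model is learning) outscores a region whose errors have plateaued, so the robot keeps visiting places that improve the model.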
Document type :
Journal articles
Cited literature: 76 references
Contributor: David Filliat
Submitted on: Wednesday, December 19, 2018 - 9:29:20 AM
Last modification on: Thursday, January 21, 2021 - 9:26:01 AM
Long-term archiving on: Wednesday, March 20, 2019 - 2:05:37 PM


Files produced by the author(s)
Céline Craye, Timothée Lesort, David Filliat, Jean-François Goudou. Exploring to learn visual saliency: The RL-IAC approach. Robotics and Autonomous Systems, Elsevier, 2019, 112, pp.244-259. ⟨10.1016/j.robot.2018.11.012⟩. ⟨hal-01959882⟩


