Global Robot Ego-localization Combining Image Retrieval and HMM-based Filtering

Abstract: This paper addresses the problem of global visual ego-localization for a robot equipped with a monocular camera that must navigate autonomously in an urban environment. The robot has access to a database of geo-referenced images of its environment and to the output of an odometric system (Inertial Measurement Unit or visual odometry); no GPS information is assumed to be available. The approach described and evaluated in this paper exploits a Hidden Markov Model (HMM) to combine the localization estimates provided by the odometric system with the visual similarities between acquired images and the geo-referenced image database. Exploiting these spatial and temporal constraints reduces the mean localization error from 16 m to 4 m over an 11 km path on the Google Pittsburgh dataset, compared to an image-retrieval method alone.
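The combination described in the abstract can be illustrated with a standard HMM forward-filtering recursion, where hidden states are database image locations, odometry supplies the transition probabilities, and image-retrieval similarity scores act as emission likelihoods. The sketch below is a minimal illustration of this general technique, not the authors' implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def hmm_localize(similarities, transitions, prior):
    """Forward filtering over N geo-referenced database locations.

    similarities: (T, N) visual similarity scores between each acquired
        image and the N database images (used as emission likelihoods).
    transitions: (T-1, N, N) odometry-derived transition probabilities,
        transitions[t][i][j] = P(location j at t+1 | location i at t).
    prior: (N,) initial belief over database locations.
    Returns the most likely database index at each time step.
    """
    T, _ = similarities.shape
    # Initial correction step: weight the prior by the first observation.
    belief = prior * similarities[0]
    belief /= belief.sum()
    estimates = [int(np.argmax(belief))]
    for t in range(1, T):
        # Predict with the odometry-based motion model...
        belief = transitions[t - 1].T @ belief
        # ...then correct with the visual similarity scores and renormalize.
        belief *= similarities[t]
        belief /= belief.sum()
        estimates.append(int(np.argmax(belief)))
    return estimates
```

With a transition model that favors forward motion along the path, the filtered estimate can stay on track even when a single image's similarity scores are ambiguous, which is the intuition behind the reported error reduction.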
Document type :
Conference papers
Contributor: Cédric Le Barz
Submitted on: Tuesday, September 30, 2014 - 9:56:31 AM
Last modification on: Friday, June 26, 2020 - 2:04:02 PM
Document(s) archived on: Wednesday, December 31, 2014 - 10:30:30 AM
  • HAL Id: hal-01069869, version 1


Cédric Le Barz, Nicolas Thome, Matthieu Cord, Stéphane Herbin, Martial Sanfourche. Global Robot Ego-localization Combining Image Retrieval and HMM-based Filtering. 6th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Sep 2014, Chicago, United States. 6 p. ⟨hal-01069869⟩