Global Robot Ego-localization Combining Image Retrieval and HMM-based Filtering

Abstract: This paper addresses the problem of global visual ego-localization of a robot equipped with a monocular camera that has to navigate autonomously in an urban environment. The robot has access to a database of geo-referenced images of its environment and to the outputs of an odometric system (Inertial Measurement Unit or visual odometry). We suppose that no GPS information is available. The goal of the approach described and evaluated in this paper is to exploit a Hidden Markov Model (HMM) to combine the localization estimates provided by the odometric system with the visual similarities between acquired images and the geo-localized image database. It is shown that the use of spatial and temporal constraints reduces the mean localization error from 16 m to 4 m over an 11 km path evaluated on the Google Pittsburgh dataset, compared to an image-based method alone.
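To make the filtering idea concrete, the sketch below shows discrete HMM forward filtering over candidate database locations, where odometry drives the transition model and image-retrieval similarity scores act as emission likelihoods. This is only an illustrative assumption of how such a combination can be wired together; the function name `hmm_localize`, the inputs `similarity` and `odometry`, the Gaussian motion model, and the parameter `motion_sigma` are all hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): HMM forward filtering
# over candidate locations. Hypothetical inputs:
#   similarity[t, i] : visual similarity between the image acquired at time t
#                      and the i-th geo-referenced database image (emission score)
#   odometry[t]      : displacement (in database-index units) reported by the
#                      odometric system between t-1 and t (drives transitions)
import numpy as np

def hmm_localize(similarity, odometry, motion_sigma=2.0):
    """Return the most likely database index at each time step."""
    T, N = similarity.shape
    positions = np.arange(N)

    # Uniform prior over all geo-referenced locations (no GPS available).
    belief = np.full(N, 1.0 / N)
    estimates = []

    for t in range(T):
        if t > 0:
            # Prediction: transition probabilities centred on the location
            # predicted by odometry, with Gaussian uncertainty (assumption).
            diff = positions[None, :] - (positions[:, None] + odometry[t])
            trans = np.exp(-0.5 * (diff / motion_sigma) ** 2)
            trans /= trans.sum(axis=1, keepdims=True)
            belief = belief @ trans

        # Update: weight the predicted belief by the visual similarity
        # scores, used here as unnormalised emission likelihoods.
        belief *= similarity[t]
        belief /= belief.sum()

        estimates.append(int(np.argmax(belief)))
    return estimates
```

In this reading, the spatial and temporal constraints mentioned in the abstract enter through the transition model, which suppresses retrieval matches that are inconsistent with the odometry; offline, a Viterbi-style decoding over the same model could be used instead of per-step filtering.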
Document type: Conference papers


https://hal.archives-ouvertes.fr/hal-01069869
Contributor: Cédric Le Barz
Submitted on: Tuesday, September 30, 2014 - 9:56:31 AM
Last modified on: Tuesday, March 26, 2019 - 2:24:42 PM
Archived on: Wednesday, December 31, 2014 - 10:30:30 AM

File: PPNIV2014_CLB.pdf (produced by the authors)

Identifiers

  • HAL Id: hal-01069869, version 1

Citation

Cédric Le Barz, Nicolas Thome, Matthieu Cord, Stéphane Herbin, Martial Sanfourche. Global Robot Ego-localization Combining Image Retrieval and HMM-based Filtering. 6th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Sep 2014, Chicago, United States. 6 p. ⟨hal-01069869⟩
