Hybrid Visual and Inertial Position and Orientation Estimation based on Known Urban 3D Models

Abstract: More and more pedestrians own devices (such as smartphones) that integrate a wide array of low-cost sensors: a camera, an IMU, a magnetometer, and a GNSS receiver. GNSS is usually used for pedestrian localization in urban environments, but the signal suffers from inaccuracies of several meters. To achieve a more accurate localization and improve pedestrian navigation and urban mobility, we present a method for city-scale localization with a handheld device. Our central idea is to estimate the 3D position and 3D orientation of the camera from knowledge of the street furniture, which is highly repeatable and covers a large area of the city. First, using inertial measurements acquired with an IMU in the vision-based method speeds up the computation of position and orientation. The hybrid method then provides a localization and an orientation as accurate as the vision-based method with manual point selection, and will certainly be better than automatic detection and point selection. Performance is reported in terms of positioning accuracy. The final aim is for our method to reach a precision good enough to propose, in future work, an on-site augmented-reality display.
Document type :
Conference papers
Contributor: Ifsttar Cadic
Submitted on: Wednesday, February 1, 2017 - 10:50:36 AM
Last modification on: Friday, July 10, 2020 - 10:50:11 AM
Nicolas Antigny, Myriam Servières, Valérie Renaudin. Hybrid Visual and Inertial Position and Orientation Estimation based on Known Urban 3D Models. IPIN 2016, International conference on Indoor Positioning and Indoor Navigation, Oct 2016, MADRID, Spain. pp.4-7, ⟨10.1109/IPIN.2016.7743619⟩. ⟨hal-01451468⟩