Combining local and global visual information in context-based neurorobotic navigation
Abstract
In robotic navigation, biologically inspired localization models have often exhibited interesting features and proven competitive with other solutions in terms of adaptability and performance. In general, place recognition systems rely on global visual descriptors, local visual descriptors, or both. In this paper, we propose a model of context-based place cells that combines these two types of information. Global visual features are extracted to represent visual contexts. Following the idea of global precedence, contexts drive a more refined recognition level that takes local visual descriptors as input. We evaluate this model on a robotic navigation dataset that we recorded outdoors. Our contribution is thus twofold: 1) a bio-inspired model of context-based place recognition using neural networks; and 2) an evaluation assessing its suitability for applications on a real robot, comparing it to 4 other architectures -- 2 variants of the model and 2 stacking-based solutions -- in terms of performance and computational cost.
The context-based model achieves the highest score on the three metrics we consider, or ranks second to one of its variants. Moreover, a key feature keeps its computational cost constant over time, whereas the cost of the other methods grows. These promising results suggest that this model is a good candidate for robust place recognition in wide environments.
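The gating mechanism described above -- global features selecting a context, which then restricts the refined matching over local descriptors -- can be illustrated with a minimal sketch. This is a hypothetical nearest-neighbor implementation, not the authors' neural model: the class name, the Euclidean matching, and the `ctx_threshold` parameter are all illustrative assumptions.

```python
import numpy as np

class ContextGatedMemory:
    """Hypothetical sketch: global descriptors pick a context; local
    descriptors are matched only against places stored in that context,
    so per-query cost depends on one context, not the whole map."""

    def __init__(self):
        self.contexts = []  # one global descriptor per context
        self.places = []    # per context: list of (local_descriptor, place_id)

    def _nearest_context(self, global_desc, threshold):
        # Return the index of the closest stored context, or None if
        # none exists or the best match exceeds the threshold.
        if not self.contexts:
            return None
        dists = [np.linalg.norm(global_desc - c) for c in self.contexts]
        best = int(np.argmin(dists))
        return best if dists[best] < threshold else None

    def add_place(self, global_desc, local_desc, place_id, ctx_threshold=0.5):
        # Create a new context when the global view is unfamiliar.
        ctx = self._nearest_context(global_desc, ctx_threshold)
        if ctx is None:
            self.contexts.append(global_desc)
            self.places.append([])
            ctx = len(self.contexts) - 1
        self.places[ctx].append((local_desc, place_id))

    def recognize(self, global_desc, local_desc):
        # Coarse step: select the nearest context from global features.
        ctx = self._nearest_context(global_desc, np.inf)
        if ctx is None:
            return None
        # Refined step: match local descriptors within that context only.
        best_id, best_d = None, np.inf
        for desc, pid in self.places[ctx]:
            d = np.linalg.norm(local_desc - desc)
            if d < best_d:
                best_id, best_d = pid, d
        return best_id
```

Because recognition only scans the places of one context, query time stays bounded as the map grows, which is the intuition behind the constant computational cost claimed for the context-based model.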
Origin: Files produced by the author(s)