Multimodal integration of visual place cells and grid cells for navigation tasks of a real robot

Adrien Jauffret 1,*, Nicolas Cuperlier 1, Philippe Gaussier 2, Philippe Tarroux 3
* Corresponding author
1 Neurocybernétique
ETIS - Equipes Traitement de l'Information et Systèmes
2 Neurocybernétique
ETIS - Equipes Traitement de l'Information et Systèmes
3 CPU
LIMSI - Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur [Orsay]
Abstract : In the present study, we propose a model of multimodal place cells merging visual and proprioceptive primitives. First we will briefly present our previous sensory-motor architecture, highlighting the limitations of a purely vision-based system. Then we will introduce a new model of proprioceptive localization, giving rise to the so-called grid cells, which are congruent with neurobiological studies made on rodents. Finally we will show how a simple conditioning rule between both modalities can outperform vision-only models by producing robust multimodal place cells. Experiments show that this model enhances robot localization and also makes it possible to solve some benchmark problems for real-life robotics applications.
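To make the abstract's pipeline concrete, below is a minimal Python sketch, not taken from the paper: it assumes grid-cell activity derived from path integration through a per-cell modulo compression (one mechanism proposed for grid-like firing in this line of work), and a Widrow-Hoff (LMS) rule as the "simple conditioning rule" that merges the visual and proprioceptive codes into multimodal place cells. All layer sizes, the learning rate, and the Gaussian tuning width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VISUAL, N_GRID, N_PLACE = 20, 30, 10   # layer sizes (assumed)
ETA = 0.1                                # LMS learning rate (assumed)

spacings = rng.uniform(0.3, 1.5, N_GRID)      # grid spacing per cell (m)
offsets = rng.uniform(0.0, 1.5, (N_GRID, 2))  # spatial phase per cell

def grid_activity(position):
    """Grid-like code from path integration: the integrated 2D position
    is compressed by a per-cell modulo; activity peaks where the wrapped
    residue is small, yielding spatially periodic firing."""
    res = np.mod(position[None, :] - offsets, spacings[:, None])
    dist = np.linalg.norm(np.minimum(res, spacings[:, None] - res), axis=1)
    return np.exp(-(dist / (0.2 * spacings)) ** 2)

# Weights from each modality onto the multimodal place cells.
W_v = np.zeros((N_PLACE, N_VISUAL))
W_g = np.zeros((N_PLACE, N_GRID))

def lms_step(visual, position, target):
    """One Widrow-Hoff conditioning step: each multimodal place cell
    learns to predict its target activation from both modalities."""
    g = grid_activity(position)
    y = W_v @ visual + W_g @ g
    err = target - y
    W_v[...] += ETA * np.outer(err, visual)  # in-place update of weights
    W_g[...] += ETA * np.outer(err, g)
    return y

# Toy usage: recruit place cell 0 at one location, then recall it.
pos = np.array([0.5, 0.5])
vis = rng.random(N_VISUAL)            # stand-in for a visual place code
target = np.eye(N_PLACE)[0]
for _ in range(50):
    lms_step(vis, pos, target)
print(lms_step(vis, pos, target)[0])  # ~1.0 at the learned location
```

Because the two modalities enter additively, the learned place cell can still respond when one input degrades (e.g., poor visual landmarks), which is the robustness argument the abstract makes for multimodal over vision-only localization.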
Document type :
Conference papers
12th International Conference on Simulation of Adaptive Behavior, SAB 2012, Aug 2012, Odense, Denmark. Springer, pp.136-145, 2012, LNAI


https://hal.archives-ouvertes.fr/hal-00737047
Contributor : Nicolas Cuperlier
Submitted on : Tuesday, November 6, 2012 - 1:20:27 PM
Last modification on : Monday, October 13, 2014 - 3:43:25 PM

Identifiers

  • HAL Id : hal-00737047, version 1

Citation

Adrien Jauffret, Nicolas Cuperlier, Philippe Gaussier, Philippe Tarroux. Multimodal integration of visual place cells and grid cells for navigation tasks of a real robot. 12th International Conference on Simulation of Adaptive Behavior, SAB 2012, Aug 2012, Odense, Denmark. Springer, pp.136-145, 2012, LNAI. <hal-00737047>
