S. V. Albertin, A. B. Mulder, E. Tabuchi, M. B. Zugaro, and S. I. Wiener, Lesions of the medial shell of the nucleus accumbens impair rats in finding larger rewards, but spare reward-seeking behavior, Behavioural Brain Research, vol.117, issue.1-2, pp.173-183, 2000.
DOI : 10.1016/S0166-4328(00)00303-X

URL : https://hal.archives-ouvertes.fr/hal-00618317

A. Arleo and W. Gerstner, Spatial cognition and neuro-mimetic navigation: a model of hippocampal place cell activity, Biological Cybernetics, vol.83, issue.3, pp.287-299, 2000.
DOI : 10.1007/s004220000171

G. Baldassarre, A modular neural-network model of the basal ganglia's role in learning and selecting motor behaviours, Cognitive Systems Research, vol.3, issue.1, pp.5-13, 2002.
DOI : 10.1016/S1389-0417(01)00039-0

K. Doya, K. Samejima, K. Katagiri, and M. Kawato, Multiple Model-Based Reinforcement Learning, Neural Computation, vol.14, issue.6, pp.1347-1369, 2002.
DOI : 10.1162/089976602753712972

D. Filliat, B. Girard, A. Guillot, M. Khamassi, L. Lachèze et al., State of the artificial rat Psikharpax, From Animals to Animats 8: Proceedings of the Eighth International Conference on Simulation of Adaptive Behavior, pp.3-12, 2004.
URL : https://hal.archives-ouvertes.fr/hal-00016411

B. Fritzke, A growing neural gas network learns topologies, Advances in Neural Information Processing Systems, pp.625-632, 1995.

S. Geman, E. Bienenstock, and R. Doursat, Neural Networks and the Bias/Variance Dilemma, Neural Computation, vol.4, issue.1, pp.1-58, 1992.
DOI : 10.1162/neco.1992.4.1.1

K. Gurney, T. J. Prescott, and P. Redgrave, A computational model of action selection in the basal ganglia. I. A new functional anatomy, Biological Cybernetics, vol.84, issue.6, pp.401-411, 2001.
DOI : 10.1007/PL00007984

J. Holmström, Growing neural gas: Experiments with GNG, GNG with utility and supervised GNG, 2002.

M. S. Jog, Y. Kubota, C. I. Connolly, V. Hillegaart, and A. M. Graybiel, Building Neural Representations of Habits, Science, vol.286, issue.5445, pp.1745-1749, 1999.
DOI : 10.1126/science.286.5445.1745

M. Khamassi, L. Lachèze, B. Girard, A. Berthoz, and A. Guillot, Actor-Critic Models of Reinforcement Learning in the Basal Ganglia: From Natural to Artificial Rats, Adaptive Behavior, vol.13, issue.2, pp.131-148, 2005.
DOI : 10.1177/105971230501300205

URL : https://hal.archives-ouvertes.fr/hal-00016390

T. Kohonen, Self-organizing maps, 1995.

J. K. Lee and I. H. Kim, Reinforcement learning control using self-organizing map and multi-layer feed-forward neural network, International Conference on Control Automation and Systems, 2003.

J. Meyer, A. Guillot, B. Girard, M. Khamassi, P. Pirim et al., The Psikharpax project: towards building an artificial rat, Robotics and Autonomous Systems, vol.50, issue.4, pp.211-234, 2005.
DOI : 10.1016/j.robot.2004.09.018

URL : https://hal.archives-ouvertes.fr/hal-00016391

S. Marsland, J. Shapiro, and U. Nehmzow, A self-organising network that grows when required, Neural Networks, vol.15, issue.8-9, pp.1041-1058, 2002.
DOI : 10.1016/S0893-6080(02)00078-3

T. J. Prescott, P. Redgrave, and K. Gurney, Layered Control Architectures in Robots and Vertebrates, Adaptive Behavior, vol.7, issue.1, pp.99-127, 1999.
DOI : 10.1177/105971239900700105

W. Schultz, P. Dayan, and P. R. Montague, A Neural Substrate of Prediction and Reward, Science, vol.275, issue.5306, pp.1593-1599, 1997.
DOI : 10.1126/science.275.5306.1593

A. J. Smith, Applications of the self-organizing map to reinforcement learning, Neural Networks, vol.15, issue.8-9, pp.1107-1131, 2002.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.

B. Tang, M. I. Heywood, and M. Shepherd, Input Partitioning to Mixture of Experts, IEEE/INNS International Joint Conference on Neural Networks, pp.227-232, 2002.

J. Tani and S. Nolfi, Learning to perceive the world as articulated: an approach for hierarchical learning in sensory-motor systems, Neural Networks, vol.12, issue.7-8, pp.1131-1141, 1999.
DOI : 10.1016/S0893-6080(99)00060-X