Journal articles

Bootstrapping Q-Learning for Robotics from Neuro-Evolution Results

Matthieu Zimmer 1, 2, 3 Stephane Doncieux 3, 4
1 MAIA - Autonomous intelligent machine
Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
2 CORTEX - Neuromimetic intelligence
Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
4 AMAC
ISIR - Institut des Systèmes Intelligents et de Robotique
Abstract: Reinforcement learning problems are hard to solve in a robotics context, as classical algorithms rely on discrete representations of actions and states, whereas in robotics both are continuous. A discrete set of actions and states can be defined by hand, but this requires an expertise that may not be available, in particular in open environments. A process is proposed to let a robot build its own representation for a reinforcement learning algorithm. The principle is to first use a direct policy search in the sensori-motor space, i.e. with no predefined discrete sets of states or actions, and then to extract discrete actions from the corresponding learning traces and identify the dimensions of the state that are relevant for estimating the value function. Once this is done, the robot can apply reinforcement learning (1) to be more robust to new domains and, if required, (2) to learn faster than a direct policy search. This approach takes the best of both worlds: first learning in a continuous space to avoid the need for a predefined representation, at the price of a long learning process and poor generalization, and then learning with an adapted representation to be faster and more robust.
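
The pipeline outlined in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal, hypothetical outline under simplifying assumptions: the neuro-evolution stage is replaced by randomly generated placeholder learning traces, discrete actions are extracted with a basic k-means clustering, relevant state dimensions are selected with a simple variance threshold, and the state is discretized on a uniform grid before a standard tabular Q-learning update is applied. All names, thresholds, and dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): assume a direct policy search (neuro-evolution) already ran
# in the continuous sensori-motor space and logged its learning traces as
# (state, action) pairs. Here the traces are random placeholders.
trace_states = rng.uniform(-1.0, 1.0, size=(1000, 4))   # 4 continuous sensor dims
trace_actions = rng.uniform(-1.0, 1.0, size=(1000, 2))  # 2 continuous motor dims

# Stage 2a: extract a small discrete action set from the traces (k-means here).
def kmeans(points, k, iters=50):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

discrete_actions = kmeans(trace_actions, k=4)

# Stage 2b: keep only state dimensions that vary enough in the traces to matter
# for the value function (a crude relevance filter, for illustration only).
relevant_dims = np.where(trace_states.std(axis=0) > 0.1)[0]

# Stage 3: tabular Q-learning on the extracted representation.
n_bins = 5
def discretize(state):
    s = state[relevant_dims]
    idx = np.clip(((s + 1.0) / 2.0 * n_bins).astype(int), 0, n_bins - 1)
    return tuple(idx)

Q = {}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def q_values(s):
    return Q.setdefault(s, np.zeros(len(discrete_actions)))

def q_update(s, a_idx, reward, s_next):
    # Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * q_values(s_next).max()
    q_values(s)[a_idx] += alpha * (td_target - q_values(s)[a_idx])

def act(state):
    # Epsilon-greedy choice among the extracted discrete actions.
    s = discretize(state)
    if rng.random() < epsilon:
        return int(rng.integers(len(discrete_actions)))
    return int(np.argmax(q_values(s)))

# Example of one learning step on the extracted representation (dummy transition).
s, s_next = discretize(trace_states[0]), discretize(trace_states[1])
q_update(s, act(trace_states[0]), reward=1.0, s_next=s_next)
```

The sketch only fixes the ideas of clustering actions, filtering state dimensions, and then running the usual Q-learning temporal-difference update over the resulting discrete representation; how the traces are produced and how the representation is extracted in practice is exactly what the paper studies.
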

Cited literature: 77 references

https://hal.archives-ouvertes.fr/hal-01494744
Contributor: Matthieu Zimmer
Submitted on: Thursday, March 23, 2017 - 9:12:51 PM
Last modification on: Wednesday, May 20, 2020 - 2:32:07 PM
Archived on: Saturday, June 24, 2017 - 4:24:13 PM

File

article.pdf
Files produced by the author(s)

Citation

Matthieu Zimmer, Stephane Doncieux. Bootstrapping Q-Learning for Robotics from Neuro-Evolution Results. IEEE Transactions on Cognitive and Developmental Systems, Institute of Electrical and Electronics Engineers, Inc, 2017, ⟨10.1109/TCDS.2016.2628817⟩. ⟨hal-01494744⟩
