Conference papers

State Representation Learning from Demonstration

Abstract: In a context where several policies can be observed as black boxes on different instances of a control task, we propose a method to derive a state representation that can be relied on to reproduce any of the observed policies. We do so via imitation learning on a multi-head neural network consisting of a first part that outputs a common state representation and then one head per policy to imitate. If the demonstrations contain enough diversity, the state representation is general and can be transferred to learn new instances of the task. We present a proof of concept with experimental results on a simulated 2D robotic arm performing a reaching task, with noisy image inputs containing a distractor, and show that the state representations learned provide a greater speed-up to end-to-end reinforcement learning on new instances of the task than other classical representations.
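The multi-head architecture described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, the single-layer encoder, and the linear heads are assumptions chosen for brevity. The key idea it shows is a shared trunk producing the common state representation, with one head per demonstrated policy trained by imitation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiHeadImitator:
    """Illustrative sketch: a shared encoder outputs the common state
    representation; one linear head per observed black-box policy maps
    that representation to imitated actions. All sizes are hypothetical."""

    def __init__(self, obs_dim, state_dim, action_dim, n_policies):
        # Shared trunk weights (learned jointly with all heads in the paper's setup).
        self.W_enc = rng.normal(0.0, 0.1, (obs_dim, state_dim))
        # One imitation head per demonstrated policy.
        self.heads = [rng.normal(0.0, 0.1, (state_dim, action_dim))
                      for _ in range(n_policies)]

    def state(self, obs):
        # Common state representation, shared by every head; this is the
        # part that can be transferred to new instances of the task.
        return relu(obs @ self.W_enc)

    def action(self, obs, policy_idx):
        # Imitate the policy indexed by `policy_idx` from the shared state.
        return self.state(obs) @ self.heads[policy_idx]

# Hypothetical sizes: a flattened noisy image (64 dims), an 8-dim state,
# 2-dim actions (e.g. arm joint commands), 3 demonstrated policies.
model = MultiHeadImitator(obs_dim=64, state_dim=8, action_dim=2, n_policies=3)
obs = rng.normal(size=(5, 64))        # batch of 5 observations
s = model.state(obs)                  # (5, 8) transferable representation
a = model.action(obs, policy_idx=1)   # (5, 2) actions imitating policy 1
```

In training, imitation losses from all heads would backpropagate into the shared encoder, so the representation must support reproducing every observed policy; that shared pressure is what makes it general enough to transfer.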
Contributor: Nicolas Perrin-Gilbert
Submitted on: Friday, December 18, 2020 - 9:03:53 PM
Last modification on: Tuesday, May 31, 2022 - 8:36:02 PM



  • HAL Id: hal-03083156, version 1
  • arXiv: 1910.01738


Astrid Merckling, Alexandre Coninx, Loïc Cressot, Stéphane Doncieux, Nicolas Perrin. State Representation Learning from Demonstration. 6th International Conference on Machine Learning, Optimization, and Data Science, LOD 2020, Jul 2020, Siena, Italy. ⟨hal-03083156⟩


