Combining learned skills and reinforcement learning for robotic manipulations

Robin Strudel 1,2, Alexander Pashevich 3, Igor Kalevatykh 1,2, Ivan Laptev 1,2, Josef Sivic 1,2, Cordelia Schmid 3
1 WILLOW - Models of visual object recognition and scene understanding
2 DI-ENS - Département d'informatique de l'École normale supérieure, Inria de Paris
3 Thoth - Learning models from massive data (Apprentissage de modèles à partir de données massives)
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
Abstract: Manipulation tasks such as preparing a meal or assembling furniture remain highly challenging for robotics and vision. The supervised approach of imitation learning can handle short tasks but suffers from compounding errors and the need for many demonstrations on longer and more complex tasks. Reinforcement learning (RL) can find solutions beyond demonstrations but requires tedious and task-specific reward engineering for multi-step problems. In this work we address the difficulties of both methods and explore their combination. To this end, we propose RL policies operating on pre-trained skills that can learn composite manipulations using no intermediate rewards and no demonstrations of full tasks. We also propose efficient training of basic skills from a few synthetic demonstration trajectories by exploring recent CNN architectures and data augmentation. We show successful learning of policies for composite manipulation tasks such as making a simple breakfast. Notably, our method achieves high success rates on a real robot, while using synthetic training data only.
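The abstract's core idea, a high-level RL policy that selects among pre-trained skills and learns from a sparse task-completion reward only, can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the authors' code: the skill names, the stage-based toy environment, and the tabular Q-learning master policy are stand-ins for the paper's learned visual skills and neural policy.

```python
import random

SKILLS = ["reach", "grasp", "pour"]  # hypothetical pre-trained skills

def run_skill(state, skill):
    """Toy skill execution: a skill advances the task one stage only
    when applied in the correct order (reach -> grasp -> pour)."""
    return state + 1 if SKILLS[state] == skill else state

def train_master_policy(episodes=500, horizon=6, eps=0.2,
                        alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning over skills with a sparse terminal reward,
    mirroring the idea of using no intermediate rewards."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(SKILLS)) for a in SKILLS}
    for _ in range(episodes):
        state = 0
        for _ in range(horizon):
            # epsilon-greedy choice among the available skills
            if rng.random() < eps:
                action = rng.choice(SKILLS)
            else:
                action = max(SKILLS, key=lambda k: q[(state, k)])
            nxt = run_skill(state, action)
            done = nxt == len(SKILLS)
            reward = 1.0 if done else 0.0  # reward only on task completion
            target = reward if done else gamma * max(q[(nxt, k)] for k in SKILLS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            if done:
                break
            state = nxt
    return q

q = train_master_policy()
plan = [max(SKILLS, key=lambda k: q[(s, k)]) for s in range(len(SKILLS))]
print(plan)  # greedy skill sequence recovered from the sparse reward
```

The point of the sketch is that once skills are reliable primitives, the composite task reduces to a short sequential decision problem, so a sparse final reward suffices where low-level RL would need dense shaping.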
Contributor: Alexander Pashevich
Submitted on: Friday, August 30, 2019 - 1:26:02 PM
Last modification on: Tuesday, September 3, 2019 - 1:16:43 AM



  • HAL Id: hal-02274969, version 1
  • arXiv: 1908.00722



Robin Strudel, Alexander Pashevich, Igor Kalevatykh, Ivan Laptev, Josef Sivic, et al. Combining learned skills and reinforcement learning for robotic manipulations. 2019. ⟨hal-02274969⟩


