Reinforcement Learning with Sequences of Motion Primitives for Robust Manipulation

Abstract: Physical contact events often allow a natural decomposition of manipulation tasks into action phases and subgoals. Within the motion primitive paradigm, each action phase corresponds to a motion primitive, and the subgoals correspond to the goal parameters of these primitives. Current state-of-the-art reinforcement learning algorithms are able to efficiently and robustly optimize the parameters of motion primitives in very high-dimensional problems. These algorithms often consider only shape parameters, which determine the trajectory between the start and end point of the movement. In manipulation, however, it is also crucial to optimize the goal parameters, which represent the subgoals between the motion primitives. We therefore extend the policy improvement with path integrals (PI$^2$) algorithm to simultaneously optimize shape and goal parameters. Applying simultaneous shape and goal learning to sequences of motion primitives leads to the novel algorithm PI$^2$-Seq. We use our methods to address a fundamental challenge in manipulation: improving the robustness of everyday pick-and-place tasks.
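
To make the abstract's central idea concrete, the following is a minimal sketch of the cost-weighted averaging that underlies PI$^2$-style policy improvement, applied to a single parameter vector that stacks both shape and goal parameters. It is not the authors' implementation: the full PI$^2$ algorithm performs per-time-step, basis-function-weighted updates on dynamic movement primitives, and PI$^2$-Seq additionally handles sequences of primitives. The function names, hyperparameters, and toy cost below are illustrative assumptions only.

    import numpy as np

    def pi2_style_update(params, rollout_cost, n_rollouts=20, sigma=0.1, h=10.0, rng=None):
        """One exploration/update cycle on a flat parameter vector.

        params       : current parameters (e.g. concatenated shape + goal parameters)
        rollout_cost : callable mapping a perturbed parameter vector to a scalar cost
        """
        rng = np.random.default_rng() if rng is None else rng
        eps = sigma * rng.standard_normal((n_rollouts, params.size))   # exploration noise
        costs = np.array([rollout_cost(params + e) for e in eps])      # evaluate rollouts

        # Map costs to weights: lower cost -> higher weight (exponentiated normalized cost).
        s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-10)
        weights = np.exp(-h * s)
        weights /= weights.sum()

        # Parameter update = probability-weighted average of the exploration noise.
        return params + weights @ eps

    if __name__ == "__main__":
        # Toy usage: hypothetical "shape" and "goal" parameters optimized jointly
        # on a quadratic cost; stands in for learning a subgoal of a pick-and-place phase.
        theta_shape = np.zeros(5)      # placeholder for motion primitive shape parameters
        goal = np.array([0.0])         # placeholder for the primitive's goal parameter
        target_goal = 0.7              # hypothetical subgoal the task cost prefers

        def cost(p):
            shape, g = p[:5], p[5]
            return np.sum(shape ** 2) + 5.0 * (g - target_goal) ** 2

        params = np.concatenate([theta_shape, goal])
        for _ in range(100):
            params = pi2_style_update(params, cost)
        print("learned goal parameter:", params[5])   # should approach target_goal

The key point this sketch illustrates is that nothing restricts the perturbed parameter vector to shape parameters alone; appending the goal parameters and perturbing them with the same cost-weighted scheme is what the abstract refers to as simultaneous shape and goal learning.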
Document type: Journal articles

https://hal.archives-ouvertes.fr/hal-00766177
Contributor: Freek Stulp
Submitted on: Monday, December 17, 2012 - 5:13:24 PM
Last modification on: Thursday, February 7, 2019 - 2:25:53 PM

Identifiers

  • HAL Id: hal-00766177, version 1

Citation

Freek Stulp, Evangelos Theodorou, Stefan Schaal. Reinforcement Learning with Sequences of Motion Primitives for Robust Manipulation. IEEE Transactions on Robotics, 2012, 28(6), pp. 1360-1370. ⟨hal-00766177⟩
