
Modulated Policy Hierarchies

Abstract : Solving tasks with sparse rewards is a central challenge in reinforcement learning. While hierarchical controllers are an intuitive approach to this problem, current methods often require manual reward shaping, alternating training phases, or manually defined sub-tasks. We introduce modulated policy hierarchies (MPH), which learn end-to-end to solve tasks from sparse rewards. To achieve this, we study different modulation signals and exploration strategies for hierarchical controllers. Specifically, we find that communicating via bit-vectors is more efficient than selecting one out of multiple skills, as it enables mixing between them. To facilitate exploration, MPH uses its different time scales for temporally extended intrinsic motivation at each level of the hierarchy. We evaluate MPH on the robotics tasks of pushing and sparse block stacking, where it outperforms recent baselines.
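The contrast between one-hot skill selection and bit-vector modulation can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the paper's implementation: the skill networks, the averaging rule, and all names here are assumptions for exposition only.

```python
import numpy as np

# Hypothetical setup: a master policy modulates 4 low-level "skills",
# each proposing a 2-D action (stand-ins for skill-network outputs).
rng = np.random.default_rng(0)
skill_actions = rng.normal(size=(4, 2))  # one action proposal per skill

# One-hot modulation: the master activates exactly one skill,
# so the resulting action is that single skill's proposal.
one_hot = np.array([0, 1, 0, 0])
action_select = one_hot @ skill_actions  # equals skill_actions[1]

# Bit-vector modulation: several skills can be active simultaneously,
# so their proposals mix (here via a simple average of active skills —
# an illustrative choice, not the combination rule from the paper).
bits = np.array([1, 1, 0, 1])
action_mix = (bits @ skill_actions) / bits.sum()
```

With a one-hot signal the master can only switch between skills, while a bit-vector with k entries indexes 2^k activation patterns and lets the hierarchy blend skill behaviors.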
Document type :
Conference papers

Contributor : Alexander Pashevich
Submitted on : Friday, December 21, 2018 - 1:53:44 PM
Last modification on : Friday, February 4, 2022 - 3:24:28 AM

  • HAL Id : hal-01963580, version 1
  • ARXIV : 1812.00025



Alexander Pashevich, Danijar Hafner, James Davidson, Rahul Sukthankar, Cordelia Schmid. Modulated Policy Hierarchies. Deep Reinforcement Learning Workshop at NeurIPS 2018, Dec 2018, Montreal, Canada. ⟨hal-01963580⟩
