Modulated Policy Hierarchies

Abstract: Solving tasks with sparse rewards is a major challenge in reinforcement learning. While hierarchical controllers are an intuitive approach to this problem, current methods often require manual reward shaping, alternating training phases, or manually defined sub-tasks. We introduce modulated policy hierarchies (MPH), which can learn end-to-end to solve tasks from sparse rewards. To achieve this, we study different modulation signals and exploration strategies for hierarchical controllers. Specifically, we find that communicating via bit-vectors is more efficient than selecting one out of multiple skills, as it enables mixing between them. To facilitate exploration, MPH uses its different time scales for temporally extended intrinsic motivation at each level of the hierarchy. We evaluate MPH on the robotics tasks of pushing and sparse block stacking, where it outperforms recent baselines.
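The abstract contrasts two ways a master policy can communicate with lower levels of the hierarchy. The following minimal sketch (Python/NumPy; not the authors' implementation) illustrates that contrast: the signal width, the independent Bernoulli sampling of bits, and the toy linear policy head are all illustrative assumptions, not details taken from the paper.

```python
# Sketch contrasting a one-hot skill selector with MPH-style
# bit-vector modulation. All sizes and the Bernoulli parameterization
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_BITS = 4  # width of the modulation signal (assumption)

def one_hot_skill(logits):
    """Option-style selection: exactly one skill is active at a time."""
    signal = np.zeros_like(logits)
    signal[np.argmax(logits)] = 1.0
    return signal

def bit_vector(probs):
    """Bit-vector modulation: each bit is sampled independently, so the
    low-level policy can blend behaviors from several skills at once."""
    return (rng.random(probs.shape) < probs).astype(np.float32)

def low_level_action(observation, modulation, weights):
    """Low-level policy conditioned on the observation plus the signal."""
    features = np.concatenate([observation, modulation])
    return np.tanh(weights @ features)  # toy linear policy head

obs = rng.standard_normal(8)
W = rng.standard_normal((2, 8 + NUM_BITS))
probs = np.full(NUM_BITS, 0.5)  # placeholder master-policy output

print("one-hot :", low_level_action(obs, one_hot_skill(probs), W))
print("bit-vec :", low_level_action(obs, bit_vector(probs), W))
```

With k bits the master policy can express 2^k distinct modulation patterns rather than k mutually exclusive skills, which is the mixing advantage the abstract refers to.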
Document type: Other publication
Deep RL workshop at NIPS 2018. 2018

https://hal.archives-ouvertes.fr/hal-01963580
Contributor: Alexander Pashevich
Submitted on: Friday, December 21, 2018 - 13:53:44
Last modified on: Friday, January 18, 2019 - 14:10:02

Identifiers

  • HAL Id: hal-01963580, version 1
  • arXiv: 1812.00025

Citation

Alexander Pashevich, Danijar Hafner, James Davidson, Rahul Sukthankar, Cordelia Schmid. Modulated Policy Hierarchies. Deep RL workshop at NIPS 2018. 2018. 〈hal-01963580〉
