Deep Reinforcement Learning and the Deadly Triad

Hado Van Hasselt (1), Yotam Doron (1), Florian Strub (2, 3, 4, 1), Matteo Hessel (1), Nicolas Sonnerat (1), Joseph Modayil (1)
2: SEQUEL - Sequential Learning, Inria Lille - Nord Europe; CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: We know from reinforcement learning theory that temporal difference learning can fail in certain cases. Sutton and Barto (2018) identify a deadly triad of function approximation, bootstrapping, and off-policy learning. When these three properties are combined, learning can diverge, with the value estimates becoming unbounded. However, several algorithms successfully combine these three properties, which indicates that there is at least a partial gap in our understanding. In this work, we investigate the impact of the deadly triad in practice, in the context of a family of popular deep reinforcement learning models - deep Q-networks trained with experience replay - analysing how the components of this system play a role in the emergence of the deadly triad and in the agent's performance.
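As an illustration of the combination the abstract describes (not code from the paper), the sketch below shows a semi-gradient Q-learning update with linear function approximation: the weight matrix provides the function approximation, the max over next-state values in the target provides bootstrapping, and the transition may come from an arbitrary behaviour policy, which makes the update off-policy. All names, dimensions, and numerical values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a semi-gradient Q-learning update with linear function
# approximation. It combines the three components of the deadly triad:
#   - function approximation: q(s, a) = w[a] @ phi(s)
#   - bootstrapping: the target uses the current estimate of the next state's value
#   - off-policy learning: the transition can come from any behaviour policy,
#     while the target uses the greedy (max) action.
# Hyperparameters and sizes below are arbitrary illustrative choices.

rng = np.random.default_rng(0)

n_features, n_actions = 8, 4
w = np.zeros((n_actions, n_features))   # linear value-function weights
alpha, gamma = 0.1, 0.99                # step size and discount factor

def q_values(phi):
    """Action values for a state with feature vector phi."""
    return w @ phi

def td_update(phi, a, r, phi_next):
    """One off-policy, bootstrapped, semi-gradient update for action a."""
    target = r + gamma * np.max(q_values(phi_next))  # bootstrapped, greedy target
    td_error = target - q_values(phi)[a]
    w[a] += alpha * td_error * phi                   # semi-gradient step

# Toy transition (s, a, r, s') drawn from an arbitrary behaviour policy.
phi_s = rng.normal(size=n_features)
phi_s_next = rng.normal(size=n_features)
td_update(phi_s, a=int(rng.integers(n_actions)), r=1.0, phi_next=phi_s_next)
```

Because the update bootstraps on its own estimates and the data distribution is not tied to the greedy target policy, repeated updates of this form can, in unfavourable cases, drive the weights to diverge; this is the failure mode the deadly triad names.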
Document type: Preprint, working paper
Year: 2018

https://hal.archives-ouvertes.fr/hal-01949304
Contributor: Florian Strub
Submitted on: Monday, 10 December 2018 - 00:23:56
Last modified on: Friday, 22 March 2019 - 01:37:13

Identifiers

  • HAL Id: hal-01949304, version 1
  • arXiv: 1812.02648

Citation

Hado Van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, et al. Deep Reinforcement Learning and the Deadly Triad. 2018. 〈hal-01949304〉
