
Deep Reinforcement Learning and the Deadly Triad

Abstract: We know from reinforcement learning theory that temporal difference learning can fail in certain cases. Sutton and Barto (2018) identify a deadly triad of function approximation, bootstrapping, and off-policy learning. When these three properties are combined, learning can diverge, with the value estimates becoming unbounded. However, several algorithms successfully combine these three properties, which indicates that there is at least a partial gap in our understanding. In this work, we investigate the impact of the deadly triad in practice, in the context of a family of popular deep reinforcement learning models (deep Q-networks trained with experience replay), analysing how the components of this system play a role in the emergence of the deadly triad and in the agent's performance.
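To make the three ingredients of the triad concrete, here is a minimal, self-contained sketch (not the paper's experimental code) of Q-learning with linear function approximation, bootstrapped targets, and off-policy samples drawn from an experience replay buffer. All names (theta, replay, update, and the hyperparameters) are illustrative choices, not taken from the paper.

```python
import random
from collections import deque

import numpy as np

rng = np.random.default_rng(0)

n_features, n_actions = 8, 4
# Function approximation: Q(s, a) is estimated as theta[a] . phi(s).
theta = rng.normal(scale=0.1, size=(n_actions, n_features))

# Experience replay: stored transitions are replayed later, so updates
# use data from older (different) policies, i.e. off-policy learning.
replay = deque(maxlen=10_000)
gamma, alpha = 0.99, 0.01

def q_values(phi):
    """Linear approximation of Q(s, .) for a feature vector phi."""
    return theta @ phi

def store(phi, a, r, phi_next):
    replay.append((phi, a, r, phi_next))

def update(batch_size=32):
    """One semi-gradient TD update on a random replay batch."""
    batch = random.sample(replay, min(batch_size, len(replay)))
    for phi, a, r, phi_next in batch:
        # Bootstrapping: the target uses the current estimate of the
        # next state's value, max_a' Q(s', a'), rather than a full return.
        target = r + gamma * np.max(q_values(phi_next))
        td_error = target - q_values(phi)[a]
        theta[a] += alpha * td_error * phi

# Stand-in for agent experience: fill the buffer with random transitions.
for _ in range(1000):
    phi, phi_next = rng.normal(size=n_features), rng.normal(size=n_features)
    store(phi, int(rng.integers(n_actions)), float(rng.normal()), phi_next)
update()
```

Because the update is a semi-gradient step (the target is treated as a constant even though it depends on theta), combining it with off-policy replay data and function approximation is exactly the setting in which the value estimates can, in theory, diverge.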
Contributor: Florian Strub
Submitted on: Monday, December 10, 2018 - 12:23:56 AM
Last modification on: Friday, December 11, 2020 - 6:44:05 PM



  • HAL Id: hal-01949304, version 1
  • arXiv: 1812.02648


Hado van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, et al. Deep Reinforcement Learning and the Deadly Triad. 2018. ⟨hal-01949304⟩


