
RLMViz: Interpreting Deep Reinforcement Learning Memory

Théo Jaunet 1,2, Romain Vuillemot 1, Christian Wolf 2
1 SICAL - Situated Interaction, Collaboration, Adaptation and Learning, LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
2 imagine - Extraction de Caractéristiques et Identification, LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: We present RLMViz, a visual analytics interface for interpreting the internal memory of an agent (e.g., a robot) trained using deep reinforcement learning. This memory consists of large temporal vectors updated before each action as the agent moves through an environment. It is not trivial to understand and is often referred to as a black box, of which only the inputs (images) and outputs (actions) are understood, but not the inner workings. Using RLMViz, experts can form hypotheses about this memory, derive rules from the agent's decisions to interpret it, gain an understanding of why errors were made, and improve the future training process. We report on the main features of RLMViz, namely memory navigation and contextualization techniques using timeline juxtapositions. We also present our early findings using the VizDoom simulator, a standard benchmark for DRL navigation scenarios.
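The "large temporal vectors" described above are typically the hidden states of a recurrent policy network, snapshotted at every timestep so they can be stacked into a (timesteps × hidden-size) matrix and rendered as a timeline. The following is a minimal, hypothetical sketch of that collection step in plain NumPy (a toy RNN-style update, not the authors' code; `collect_memory_trace`, `W`, and `U` are illustrative names):

```python
import numpy as np

def collect_memory_trace(inputs, hidden_size=8, seed=0):
    """Toy recurrent agent: update a hidden memory vector h from each
    observation and record a snapshot per step. The stacked snapshots
    form the (timesteps x hidden_size) matrix a memory-visualization
    tool would render as a timeline."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(hidden_size, hidden_size))       # recurrent weights
    U = rng.normal(size=(hidden_size, inputs.shape[1]))   # input weights
    h = np.zeros(hidden_size)
    trace = []
    for x in inputs:
        h = np.tanh(W @ h + U @ x)  # simple RNN-style memory update
        trace.append(h.copy())      # snapshot before the next action
    return np.stack(trace)

# 50 timesteps of 4-dimensional observations
obs = np.random.default_rng(1).normal(size=(50, 4))
trace = collect_memory_trace(obs)
print(trace.shape)  # (50, 8)
```

Each row of `trace` is one memory state; juxtaposing such rows over time is what enables the navigation and contextualization techniques mentioned in the abstract.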
Document type :
Conference papers
Submitted on : Monday, May 27, 2019 - 4:03:26 PM
Last modification on : Friday, September 30, 2022 - 11:34:16 AM




HAL Id: hal-02140902, version 1


Théo Jaunet, Romain Vuillemot, Christian Wolf. RLMViz: Interpreting Deep Reinforcement Learning Memory. Journée Visu 2019, May 2019, Paris, France. ⟨hal-02140902⟩


