RLMViz: Interpreting Deep Reinforcement Learning Memory

Theo Jaunet 1, 2 Romain Vuillemot 1 Christian Wolf 2
1 SICAL - Situated Interaction, Collaboration, Adaptation and Learning
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
2 imagine - Extraction de Caractéristiques et Identification
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract : We present RLMViz, a visual analytics interface for interpreting the internal memory of an agent (e.g., a robot) trained using deep reinforcement learning. This memory consists of large temporal vectors updated before each action the agent takes in its environment. Such memory is not trivial to understand and is often referred to as a black box, of which only the inputs (images) and outputs (actions) are understood, but not the inner workings. Using RLMViz, experts can form hypotheses about this memory, derive rules based on the agent's decisions to interpret it, gain an understanding of why errors were made, and improve the future training process. We report on the main features of RLMViz, which are memory navigation and contextualization techniques using time-line juxtapositions. We also present our early findings using the ViZDoom simulator, a standard benchmark for DRL navigation scenarios.
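The memory described above can be pictured as a per-timestep snapshot of the agent's recurrent hidden state. The sketch below is purely illustrative and not the authors' implementation: it uses a toy tanh-blend update (`step_memory`, `run_episode`, and the `decay` parameter are all hypothetical names) to show the kind of timeline of fixed-size vectors, one snapshot recorded before each action, that a tool like RLMViz would juxtapose.

```python
import math
import random

def step_memory(h, obs, decay=0.9):
    """Toy recurrent update (illustrative only): the hidden state h,
    the agent's 'memory', is a fixed-size vector blended with the new
    observation and squashed through tanh at every step."""
    return [math.tanh(decay * hi + (1 - decay) * oi) for hi, oi in zip(h, obs)]

def run_episode(n_steps=5, dim=8, seed=0):
    """Roll out a dummy episode, recording the memory vector before each
    action. The resulting n_steps x dim timeline is the object a memory
    visualization would display as juxtaposed time-lines."""
    rng = random.Random(seed)
    h = [0.0] * dim          # memory starts empty
    timeline = []
    for _ in range(n_steps):
        obs = [rng.uniform(-1, 1) for _ in range(dim)]  # stand-in observation
        h = step_memory(h, obs)
        timeline.append(list(h))  # snapshot taken before the action
    return timeline

timeline = run_episode()
```

In a real agent the update would be an LSTM or GRU cell and the observations would be image features, but the logged artifact, a matrix of hidden-state vectors over time, has the same shape.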
Document type : Conference papers

Cited literature : 9 references

https://hal.archives-ouvertes.fr/hal-02140902
Contributor : Théo Jaunet
Submitted on : Monday, May 27, 2019 - 4:03:26 PM
Last modification on : Wednesday, November 20, 2019 - 3:21:36 AM

File

Journee_Visu_2019__DRLViz.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02140902, version 1

Citation

Theo Jaunet, Romain Vuillemot, Christian Wolf. RLMViz: Interpreting Deep Reinforcement Learning Memory. Journée Visu 2019, May 2019, Paris, France. ⟨hal-02140902⟩
