Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems

Thomas Degris¹, Olivier Sigaud¹, Pierre-Henri Wuillemin²
¹ Animatlab, LIP6 - Laboratoire d'Informatique de Paris 6
² DECISION, LIP6 - Laboratoire d'Informatique de Paris 6
Abstract: Recent decision-theoretic planning algorithms are able to find optimal solutions in large problems, using Factored Markov Decision Processes (FMDPs). However, these algorithms require perfect knowledge of the structure of the problem. In this paper, we propose SDYNA, a general framework for addressing large reinforcement learning problems by trial-and-error, with no initial knowledge of their structure. SDYNA integrates incremental planning algorithms based on FMDPs with supervised learning techniques that build structured representations of the problem. We describe SPITI, an instantiation of SDYNA that combines incremental decision tree induction for learning the structure of a problem with an incremental version of the Structured Value Iteration algorithm. We show that SPITI can build a factored representation of a reinforcement learning problem and may improve the policy faster than tabular reinforcement learning algorithms by exploiting the generalization property of decision tree induction algorithms.
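
The loop below is a minimal, runnable sketch of the SDYNA scheme summarized in the abstract: act by trial-and-error, learn a factored model of the transitions and rewards online, and replan from the learned model. It is an illustration under simplifying assumptions, not the paper's method: the toy environment, the per-variable frequency-count model, and the tabular value-iteration planner are all hypothetical stand-ins for SPITI's incremental decision tree induction and incremental Structured Value Iteration.

import random
from collections import defaultdict
from itertools import product

N_VARS, ACTIONS = 2, (0, 1)          # two binary state variables, two actions
GAMMA, EPSILON, SWEEPS = 0.9, 0.1, 3
STATES = list(product((0, 1), repeat=N_VARS))

def env_step(state, action):
    """Toy factored environment: action i sets variable i with probability 0.9."""
    nxt = list(state)
    if random.random() < 0.9:
        nxt[action] = 1
    return tuple(nxt), float(all(nxt))   # reward 1 once every variable is set

# Learned factored model: one conditional table per state variable plus a
# reward table (crude stand-ins for the decision trees induced by SPITI).
trans = [defaultdict(lambda: [1, 1]) for _ in range(N_VARS)]  # Laplace counts
rewards = defaultdict(float)
V = defaultdict(float)

def prob_next(state, action, nxt):
    """P(nxt | state, action) as a product of per-variable probabilities."""
    p = 1.0
    for i in range(N_VARS):
        zeros, ones = trans[i][(state, action)]
        p *= (ones if nxt[i] else zeros) / (zeros + ones)
    return p

def q_value(state, action):
    return rewards[(state, action)] + GAMMA * sum(
        prob_next(state, action, n) * V[n] for n in STATES)

for t in range(1000):
    state = random.choice(STATES)    # exploring starts, to keep visiting all states
    # Acting: epsilon-greedy with respect to the current value estimate.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_value(state, a))
    nxt, r = env_step(state, action)
    # Learning: update the factored transition counts and the reward table.
    for i in range(N_VARS):
        trans[i][(state, action)][nxt[i]] += 1
    rewards[(state, action)] = r
    # Planning: a few value-iteration sweeps on the learned model.
    for _ in range(SWEEPS):
        for s in STATES:
            V[s] = max(q_value(s, a) for a in ACTIONS)

print({s: max(ACTIONS, key=lambda a: q_value(s, a)) for s in STATES})

The factored structure shows up in prob_next: the joint transition probability is a product of one learned conditional distribution per state variable, which is what lets a structured learner generalize across states instead of filling a full tabular model.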
Document type: Conference paper

https://hal.archives-ouvertes.fr/hal-01336925
Contributor: Lip6 Publications
Submitted on: Friday, June 24, 2016 - 11:07:11 AM
Last modification on: Thursday, March 21, 2019 - 1:07:27 PM

Citation

Thomas Degris, Olivier Sigaud, Pierre-Henri Wuillemin. Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems. The 23rd International Conference on Machine Learning (ICML 2006), Jun 2006, Pittsburgh, Pennsylvania, United States. pp. 257-264. ⟨10.1145/1143844.1143877⟩. ⟨hal-01336925⟩
