Optimistic planning of deterministic systems

Jean-Francois Hren ¹, Rémi Munos ¹
¹ SEQUEL - Sequential Learning (LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal)
Abstract: If one possesses a model of a controlled deterministic system, then from any state one may consider the set of all states reachable from it using any sequence of actions. This forms a tree whose size is exponential in the planning horizon. Here we ask: given finite computational resources (e.g. CPU time), which may not be known ahead of time, what is the best way to explore this tree, so that once all resources have been exhausted, the algorithm can propose an action (or a sequence of actions) whose performance is as close as possible to optimal? Performance is assessed in terms of the regret (with respect to the sum of discounted future rewards) incurred by choosing the action returned by the algorithm instead of an optimal action. In this paper we investigate an optimistic exploration of the tree, in which the most promising states are explored first, and compare this approach to naive uniform exploration. Bounds on the regret are derived for both the uniform and the optimistic exploration strategies. Numerical simulations illustrate the benefit of optimistic planning.
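The optimistic strategy sketched in the abstract can be illustrated with a short, self-contained sketch. It is not the authors' implementation: the names (`optimistic_plan`, `transition`) and the assumption that rewards lie in [0, 1] are ours. The idea is the standard one the abstract describes: each leaf of the look-ahead tree gets an optimistic upper bound (its accumulated discounted reward plus the best-case discounted continuation, gamma^(d+1)/(1-gamma)), and the leaf with the largest bound is expanded first until the budget of expansions is spent.

```python
import heapq

def optimistic_plan(transition, actions, state0, gamma, n_expansions):
    """Optimistic planning sketch for a deterministic system.

    `transition(state, action)` is assumed to return (next_state, reward)
    with rewards in [0, 1]; `actions` is a finite action set.
    Each leaf's b-value upper-bounds the return of any continuation:
    accumulated discounted reward plus gamma**(depth+1) / (1 - gamma).
    """
    # Frontier of unexpanded leaves, keyed by negative b-value
    # (heapq is a min-heap); the counter breaks ties deterministically.
    # Entry: (-b_value, counter, state, depth, discounted_sum, first_action)
    counter = 0
    root_bound = 1.0 / (1.0 - gamma)
    frontier = [(-root_bound, counter, state0, 0, 0.0, None)]
    best_sum, best_action = -1.0, None

    for _ in range(n_expansions):
        if not frontier:
            break
        _, _, state, depth, disc_sum, first = heapq.heappop(frontier)
        for a in actions:
            nxt, r = transition(state, a)
            child_sum = disc_sum + (gamma ** depth) * r
            # Remember which root action starts the best path found so far.
            child_first = first if first is not None else a
            if child_sum > best_sum:
                best_sum, best_action = child_sum, child_first
            # Optimistic bound: every future reward is at most 1.
            b = child_sum + (gamma ** (depth + 1)) / (1.0 - gamma)
            counter += 1
            heapq.heappush(
                frontier, (-b, counter, nxt, depth + 1, child_sum, child_first)
            )
    return best_action
```

Because the system is deterministic, the frontier entry carries everything needed to score a path; uniform exploration would correspond to expanding the frontier in breadth-first order instead of by b-value.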
Document type :
Conference papers
Contributor: Rémi Munos
Submitted on: Tuesday, June 4, 2013 - 3:22:23 PM
Last modification on: Thursday, January 20, 2022 - 4:16:35 PM
Long-term archiving on: Thursday, September 5, 2013 - 4:23:14 AM
HAL Id: hal-00830182, version 1



Jean-Francois Hren, Rémi Munos. Optimistic planning of deterministic systems. European Workshop on Reinforcement Learning, 2008, France. pp.151-164. ⟨hal-00830182⟩
