On the Sample Complexity of Reinforcement Learning with a Generative Model

Mohammad Gheshlaghi Azar (1), Rémi Munos (2), Bert Kappen (1)
(2) SEQUEL - Sequential Learning, LIFL (Laboratoire d'Informatique Fondamentale de Lille), LAGIS (Laboratoire d'Automatique, Génie Informatique et Signal), Inria Lille - Nord Europe
Abstract: We consider the problem of learning the optimal action-value function in discounted-reward Markov decision processes (MDPs). We prove a new PAC bound on the sample complexity of a model-based value iteration algorithm with access to a generative model, which shows that for an MDP with N state-action pairs and discount factor \gamma\in[0,1), only O(N\log(N/\delta)/((1-\gamma)^3\epsilon^2)) samples are required to find an \epsilon-optimal estimate of the action-value function with probability 1-\delta. We also prove a matching lower bound of \Theta(N\log(N/\delta)/((1-\gamma)^3\epsilon^2)) on the sample complexity of estimating the optimal action-value function by any RL algorithm. To the best of our knowledge, this is the first result in which the upper bound on the sample complexity of estimating the optimal (action-) value function matches the lower bound in terms of N, \epsilon, \delta and 1/(1-\gamma). Moreover, both our lower bound and our upper bound significantly improve on the state of the art in terms of the dependence on 1/(1-\gamma).
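The model-based approach analyzed in the abstract can be illustrated with a short sketch: draw a fixed number of next-state samples per state-action pair from the generative model, build the empirical transition kernel, and run value iteration on the resulting empirical MDP. This is a minimal illustration of the general scheme, not the paper's exact algorithm; the function names, the assumption of a known reward matrix, and the fixed iteration count are all choices made here for the example.

```python
import numpy as np

def qvi_generative(sample_next_state, rewards, n_states, n_actions,
                   n_samples, gamma, n_iters=500):
    """Sketch of model-based Q-value iteration from a generative model.

    sample_next_state(s, a) draws one next state from P(.|s, a);
    rewards[s, a] is assumed known (a common simplification).
    """
    # 1. Build the empirical transition model from n_samples draws per (s, a).
    P_hat = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                P_hat[s, a, sample_next_state(s, a)] += 1.0
    P_hat /= n_samples
    # 2. Run value iteration on the empirical MDP (rewards, P_hat):
    #    Q <- r + gamma * P_hat V, with V(s) = max_a Q(s, a).
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        Q = rewards + gamma * P_hat @ Q.max(axis=1)
    return Q
```

The sample-complexity question studied in the paper is then: how large must `n_samples` be, as a function of N = `n_states * n_actions`, \epsilon, \delta, and 1/(1-\gamma), for the returned Q to be \epsilon-close to the optimal action-value function with probability 1-\delta.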
Document type: Preprints, Working Papers, ...

https://hal.archives-ouvertes.fr/hal-00830034
Contributor: Rémi Munos
Submitted on: Tuesday, June 4, 2013 - 12:02:56 PM
Last modification on: Thursday, February 21, 2019 - 10:52:49 AM


Identifiers

  • HAL Id: hal-00830034, version 1
  • arXiv: 1206.6461

Citation

Mohammad Gheshlaghi Azar, Rémi Munos, Bert Kappen. On the Sample Complexity of Reinforcement Learning with a Generative Model. 2012. ⟨hal-00830034⟩
