Mean field for Markov Decision Processes: from Discrete to Continuous Optimization

Nicolas Gast (1,2), Bruno Gaujal (1), Jean-Yves Le Boudec (2)
1: MESCAL - Middleware Efficiently Scalable, Inria Grenoble - Rhône-Alpes, LIG - Laboratoire d'Informatique de Grenoble
Abstract: We study the convergence of Markov Decision Processes made of a large number of objects to optimization problems on ordinary differential equations (ODEs). We show that the optimal reward of such a Markov Decision Process, which satisfies a Bellman equation, converges to the solution of a continuous Hamilton-Jacobi-Bellman (HJB) equation based on the mean field approximation of the Markov Decision Process. We give bounds on the difference of the rewards, and a constructive algorithm for deriving an approximating solution to the Markov Decision Process from a solution of the HJB equation. We illustrate the method on three examples pertaining, respectively, to investment strategies, population dynamics control, and scheduling in queues. They are used to illustrate and justify the construction of the controlled ODE and to show the gain obtained by solving a continuous HJB equation rather than a large discrete Bellman equation.
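To make the passage from the discrete to the continuous problem concrete, the following is a minimal sketch of the two optimality equations involved, assuming a finite-horizon formulation; the notation (empirical measure m, action a, reward r, drift f) is illustrative and not taken verbatim from the paper.

\[
  V^N(m,t) \;=\; \max_{a}\Big\{ r(m,a) \,+\, \mathbb{E}\big[\, V^N(M^N_{t+1},\,t+1) \;\big|\; M^N_t = m,\ a \,\big] \Big\}
\]

is the discrete Bellman equation for the system of N objects, where \(M^N_t\) denotes the empirical measure of the object states, while its mean-field counterpart

\[
  \frac{\partial v}{\partial t}(m,t) \;+\; \max_{a}\Big\{ r(m,a) \,+\, \nabla_m v(m,t)\cdot f(m,a) \Big\} \;=\; 0
\]

is the HJB equation associated with the controlled ODE \(\dot{m} = f(m,a)\). The result of the paper is that \(V^N\) converges to \(v\) as N grows, with explicit bounds on the gap.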

https://hal.archives-ouvertes.fr/hal-00473005
Contributor: Nicolas Gast
Submitted on: Tuesday, May 17, 2011 - 3:15:05 PM
Last modification on: Thursday, October 11, 2018 - 8:48:02 AM
Archived on: Friday, November 9, 2012 - 11:35:44 AM

File

RR_7239_MeanFieldMDP.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00473005, version 3
  • arXiv: 1004.2342

Citation

Nicolas Gast, Bruno Gaujal, Jean-Yves Le Boudec. Mean field for Markov Decision Processes: from Discrete to Continuous Optimization. 2010. ⟨hal-00473005v3⟩
