Preprint / Working Paper, Year: 2012

Action Decomposable MDP: A Linear Programming formulation for queuing network problems

Abstract

Markov decision processes (MDPs) provide a general framework for many control, decision-making, and stochastic optimization problems. In this paper we are interested in a class of queuing control problems that can be modelled as continuous-time MDPs but whose action space is exponentially large for classic solution methods. The Event-Based Dynamic Programming approach addresses this difficulty and provides algorithms (value iteration) that compute the best policy efficiently; however, there is no formal definition of the subclass of MDP models it can handle. The first contribution of this paper is to define this class, which we name "Action Decomposable MDP". The second contribution is a new MDP Linear Programming formulation that exploits "Action Decomposability", thereby extending MDP solution techniques. Finally, we give examples of applications of this framework and report numerical experiments showing the benefit of using the action decomposition properties.
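For background only (this is the standard textbook formulation, not the action-decomposable formulation proposed in the paper, which is not reproduced here): a discrete-time discounted MDP with state space $S$, admissible actions $A(s)$, rewards $r(s,a)$, transition probabilities $P(s' \mid s,a)$ and discount factor $\gamma \in (0,1)$ can be solved by the following linear program, where $\alpha$ is any strictly positive weighting over states; a continuous-time MDP of the kind considered in the paper is typically reduced to this setting by uniformization.

\[
\begin{aligned}
\min_{v} \quad & \sum_{s \in S} \alpha(s)\, v(s) \\
\text{s.t.} \quad & v(s) \;\ge\; r(s,a) + \gamma \sum_{s' \in S} P(s' \mid s,a)\, v(s') \qquad \forall s \in S,\ \forall a \in A(s).
\end{aligned}
\]

The number of constraints grows with $|S| \times |A|$, which is why an exponential action space is problematic for this classic formulation; the paper's contribution is an LP reformulation that avoids enumerating all actions when the MDP is action decomposable.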
Main file

ADMDP_Sept2012.pdf (196.84 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00727039, version 1 (31-08-2012)
hal-00727039, version 2 (14-11-2012)
hal-00727039, version 3 (29-04-2013)

Identifiers

  • HAL Id: hal-00727039, version 1

Cite

Ariel Waserhole, Vincent Jost, Jean-Philippe Gayon. Action Decomposable MDP: A Linear Programming formulation for queuing network problems. 2012. ⟨hal-00727039v1⟩
311 views
1733 downloads
