Journal article: SIAM Journal on Control and Optimization, 2012

Singularly Perturbed Discounted Markov Control Processes in a General State Space

Abstract

This work studies the asymptotic optimality of discrete-time Markov Decision Processes (MDPs for short) with general state and action spaces and with weak and strong interactions. Following an approach similar to the one developed in Liu (2001), the idea of this paper is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter $\epsilon > 0$ in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as $\epsilon$ goes to zero. Second, it is proved that a feedback control policy for the original control problem, defined using an optimal feedback policy for the limit problem, is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
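For illustration, a standard way of writing such a singularly perturbed transition kernel is sketched below. This is a generic form only; the abstract does not state the exact construction used by the authors, and the symbols $P$, $Q$, $V_\epsilon$, $\bar V$ and $\bar\pi$ are illustrative.

\[
P_\epsilon(B \mid x, a) \;=\; P(B \mid x, a) \;+\; \epsilon\, Q(B \mid x, a), \qquad B \in \mathcal{B}(X),
\]

where $P$ is a transition kernel on the state space $X$ carrying the strong (fast) interactions and $Q$ is a signed kernel with $Q(X \mid x, a) = 0$ carrying the weak (slow) interactions, so that $P_\epsilon$ remains a probability kernel for small $\epsilon > 0$. In this notation, the two results described above can be read as $V_\epsilon(x) \to \bar V(x)$ as $\epsilon \to 0$, where $\bar V$ is the value function of the averaged limit problem, and $V_\epsilon^{\bar\pi}(x) - V_\epsilon(x) \to 0$, where $\bar\pi$ is the policy for the original problem constructed from an optimal feedback policy of the limit problem.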
No file deposited

Dates and versions

hal-00759715, version 1 (02-12-2012)

Identifiers

  • HAL Id: hal-00759715, version 1

Cite

Oswaldo Costa, François Dufour. Singularly Perturbed Discounted Markov Control Processes in a General State Space. SIAM Journal on Control and Optimization, 2012, 50 (2), pp.720-747. ⟨hal-00759715⟩