C. Aliprantis and K. Border, Infinite dimensional analysis, 2006.

E. Altman, Denumerable Constrained Markov Decision Processes and Finite Approximations, Mathematics of Operations Research, vol.19, issue.1, pp.169-191, 1994.
DOI : 10.1287/moor.19.1.169

A. Arapostathis, V. Borkar, E. Fernández-Gaucherand, M. Ghosh, and S. Marcus, Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey, SIAM Journal on Control and Optimization, vol.31, issue.2, pp.282-344, 1993.
DOI : 10.1137/0331018

R. Bellman, A Markovian Decision Process, Indiana University Mathematics Journal, vol.6, issue.4, 1957.
DOI : 10.1512/iumj.1957.6.56038

D. Bertsekas and S. Shreve, Stochastic optimal control: The discrete time case, Athena Scientific, 1996.

D. Blackwell, Discrete Dynamic Programming, The Annals of Mathematical Statistics, vol.33, issue.2, pp.719-726, 1962.
DOI : 10.1214/aoms/1177704593

V. S. Borkar, A convex analytic approach to Markov decision processes, Probability Theory and Related Fields, pp.583-602, 1988.

V. S. Borkar, Average Cost Dynamic Programming Equations For Controlled Markov Chains With Partial Observations, SIAM Journal on Control and Optimization, vol.39, issue.3, p.673, 2000.
DOI : 10.1137/S0363012998345172

C. Dellacherie and P. Meyer, Probabilities and Potential, C: Potential Theory for Discrete and Continuous Semigroups, 2011.

L. E. Dubins and L. J. Savage, How to gamble if you must: Inequalities for stochastic processes, 1965.

E. Feinberg, On measurability and representation of strategic measures in Markov decision processes, Lecture Notes-Monograph Series, pp.29-43, 1996.

E. Feinberg, P. Kasyanov, and M. Zgurovsky, Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities, Mathematics of Operations Research, vol.41, issue.2, 2014.
DOI : 10.1287/moor.2015.0746

D. Gillette, Stochastic games with zero stop probabilities, Contributions to the Theory of Games, pp.179-187, 1957.
DOI : 10.1515/9781400882151-011

O. Hernández-Lerma and J. Lasserre, Markov chains and invariant probabilities, 2003.
DOI : 10.1007/978-3-0348-8024-4

A. Maitra and W. Sudderth, Discrete gambling and stochastic games, 1996.
DOI : 10.1007/978-1-4612-4002-0

J. F. Mertens and A. Neyman, Stochastic games, International Journal of Game Theory, vol.10, issue.2, pp.53-66, 1981.
DOI : 10.1007/BF01769259

J. Renault, Uniform value in dynamic programming, Journal of the European Mathematical Society, vol.13, issue.2, pp.309-330, 2011.
DOI : 10.4171/JEMS/254
URL : https://hal.archives-ouvertes.fr/hal-00265257

J. Renault and X. Venel, A distance for probability spaces, and long-term values in Markov decision processes and repeated games, arXiv preprint arXiv:1202, 2012.

D. Rhenius, Incomplete information in Markovian decision models, The Annals of Statistics, pp.1327-1334, 1974.

D. Rosenberg, E. Solan, and N. Vieille, Blackwell optimality in Markov decision processes with partial observation, The Annals of Statistics, vol.30, issue.4, pp.1178-1193, 2002.
DOI : 10.1214/aos/1031689022
URL : https://hal.archives-ouvertes.fr/hal-00464998

Y. Sawaragi and T. Yoshikawa, Discrete-time Markovian decision processes with incomplete state observation, The Annals of Mathematical Statistics, pp.78-86, 1970.

A. Yushkevich, Reduction of a controlled Markov model with incomplete data to a problem with complete information in the case of Borel state and control spaces, Theory of Probability and Its Applications, pp.153-158, 1976.