S. Abdallah and V. Lesser, Modeling task allocation using a decision theoretic model, Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '05, pp.719-726, 2005.
DOI : 10.1145/1082473.1082583

C. Amato, A. Carlin, and S. Zilberstein, Bounded dynamic programming for decentralized POMDPs, AAMAS 2007 Workshop on Multi-Agent Sequential Decision Making in Uncertain Domains, 2007.

C. Amato, D. S. Bernstein, and S. Zilberstein, Optimizing memory-bounded controllers for decentralized POMDPs, Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, 2007.

R. Becker, V. Lesser, and S. Zilberstein, Decentralized Markov Decision Processes with Event-Driven Interactions, The Third International Joint Conference on Autonomous Agents and Multi Agent Systems, pp.302-309, 2004.

R. Becker, V. Lesser, and S. Zilberstein, Analyzing Myopic Approaches for Multi-Agent Communication, IEEE/WIC/ACM International Conference on Intelligent Agent Technology, pp.550-557, 2005.
DOI : 10.1109/IAT.2005.44

R. Becker, S. Zilberstein, V. Lesser, and C. Goldman, Transition-independent decentralized Markov decision processes, Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '03, pp.41-48, 2003.
DOI : 10.1145/860575.860583

R. Becker, S. Zilberstein, V. Lesser, and C. Goldman, Solving transition independent decentralized Markov decision processes, Journal of Artificial Intelligence Research, vol.22, pp.423-455, 2004.

D. Bernstein, E. A. Hansen, and S. Zilberstein, Bounded policy iteration for decentralized POMDPs, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, 2005.

D. Bernstein, S. Zilberstein, and N. Immerman, The Complexity of Decentralized Control of Markov Decision Processes, Mathematics of Operations Research, vol.27, issue.4, pp.819-840, 2002.
DOI : 10.1287/moor.27.4.819.297

D. Bernstein, S. Zilberstein, R. Washington, and J. Bresina, Planetary rover control as a Markov decision process, The 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space, 2001.

J. Blythe, Decision-theoretic planning, 1999.

J. Blythe, Planning under uncertainty in dynamic domains, 1999.

C. Boutilier, R. Brafman, and C. Geib, Prioritized goal decomposition of Markov decision processes: Towards a synthesis of classical and decision theoretic planning, Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, pp.1156-1163, 1997.

C. Boutilier, T. Dean, and S. Hanks, Decision-theoretic planning: Structural assumptions and computational leverage, Journal of Artificial Intelligence Research, vol.1, pp.1-93, 1999.

J. Bresina, R. Dearden, N. Meuleau, S. Ramakrishnan, D. Smith et al., Planning under continuous time and resource uncertainty: A challenge for AI, 2002.

I. Chadès, B. Scherrer, and F. Charpillet, A heuristic approach for solving decentralized-POMDP, Proceedings of the 2002 ACM Symposium on Applied Computing, SAC '02, 2002.
DOI : 10.1145/508791.508804

T. Dean and S. Lin, Decomposition techniques for planning in stochastic domains, IJCAI-95, 1995.

K. Decker and V. Lesser, Quantitative Modeling of Complex Environments, Intelligent Systems in Accounting, Finance and Management, vol.28, issue.4, pp.215-234, 1993.
DOI : 10.1002/j.1099-1174.1993.tb00044.x

R. Emery-Montemerlo, G. Gordon, J. Schneider, and S. Thrun, Approximate solutions for partially observable stochastic games with common payoffs, Proceedings of the Third Joint Conference on Autonomous Agents and Multi Agent Systems, 2004.

H. O. Esben, J. M. Maja, and S. S. Gaurav, Multi-robot task allocation in the light of uncertainty, Proceedings of IEEE International Conference on Robotics and Automation, pp.3002-3007, 2002.

B. P. Gerkey and M. J. Matarić, Sold!: auction methods for multirobot coordination, IEEE Transactions on Robotics and Automation, vol.18, issue.5, pp.758-768, 2002.
DOI : 10.1109/TRA.2002.803462

C. Goldman and S. Zilberstein, Optimizing information exchange in cooperative multiagent systems, International Joint Conference on Autonomous Agents and Multi Agent Systems, pp.137-144, 2003.

C. Goldman and S. Zilberstein, Decentralized control of cooperative systems: Categorization and complexity analysis, Journal of Artificial Intelligence Research, vol.22, pp.143-174, 2004.

H. Hanna and A. Mouaddib, Task selection as decision making in multiagent system, International Joint Conference on Autonomous Agents and Multi Agent Systems, pp.616-623, 2002.

E. A. Hansen, D. Bernstein, and S. Zilberstein, Dynamic programming for partially observable stochastic games, Proceedings of the Nineteenth National Conference on Artificial Intelligence, 2004.

R. A. Howard, Dynamic Programming and Markov Processes, 1960.

D. Koller and B. Milch, Multi-agent influence diagrams for representing and solving games, Games and Economic Behavior, vol.45, issue.1, pp.181-221, 2003.
DOI : 10.1016/S0899-8256(02)00544-4

J. Marecki and M. Tambe, On opportunistic techniques for solving decentralized mdps with temporal constraints, Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS), 2007.

N. Meuleau, M. Hauskrecht, K. Kim, L. Peshkin, L. Kaelbling et al., Solving very large weakly coupled Markov decision processes, AAAI/IAAI, pp.165-172, 1998.

T. Morimoto, How to develop a RoboCupRescue agent, 2000.

R. Nair, V. Pradeep, T. Milind, and Y. Makoto, Networked distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs, Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), 2005.

R. Nair, M. Roth, M. Yokoo, and M. Tambe, Communication for improving policy computation in distributed POMDPs, Proceedings of the Third International Joint Conference on Agents and Multiagent Systems (AAMAS-04), pp.1098-1105, 2004.

R. Nair, M. Tambe, M. Yokoo, S. Marsella, and D. V. Pynadath, Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings, Proceedings of the International Joint Conference on Artificial Intelligence, pp.705-711, 2003.

L. Peshkin, K. Kim, N. Meuleau, and L. Kaelbling, Learning to cooperate via policy search, Sixteenth Conference on Uncertainty in Artificial Intelligence, pp.307-314, 2000.

P. Poupart, C. Boutilier, R. Patrascu, and D. Schuurmans, Piecewise linear value function approximation for factored MDPs, Eighteenth National Conference on Artificial Intelligence, 2002.

M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, 2005.
DOI : 10.1002/9780470316887

D. Pynadath and M. Tambe, The communicative multiagent team decision problem: Analyzing teamwork theories and models, Journal of Artificial Intelligence Research, pp.389-423, 2002.

N. Roy, J. Pineau, and S. Thrun, Spoken dialogue management using probabilistic reasoning, Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL '00, 2000.
DOI : 10.3115/1075218.1075231

S. Seuken and S. Zilberstein, Memory-bounded dynamic programming for Dec-POMDPs, 2007.

S. Singh and D. Cohn, How to dynamically merge Markov decision processes, Advances in Neural Information Processing Systems, 1998.