StarCraft: Brood War, a science fiction real-time strategy (RTS) game released in 1998 by Blizzard Entertainment.

D. W. Aha, M. Molineaux, and M. J. Ponsen, Learning to Win: Case-Based Plan Selection in a Real-Time Strategy Game, Lecture Notes in Computer Science, vol.3620, pp.5-20, 2005.
DOI : 10.1007/11536406_4

S. M. Aji and R. J. McEliece, The generalized distributive law, IEEE Transactions on Information Theory, vol.46, issue.2, pp.325-343, 2000.

D. W. Albrecht, I. Zukerman, and A. E. Nicholson, Bayesian models for keyhole plan recognition in an adventure game, User Modeling and User-Adapted Interaction, vol.8, issue.1/2, pp.5-47, 1998.
DOI : 10.1023/A:1008238218679

L. V. Allis, Searching for Solutions in Games and Artificial Intelligence, PhD thesis, 1994. Available at fragrieu.free.fr/SearchingForSolutions.pdf.

C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan, An introduction to MCMC for machine learning, Machine Learning, pp.5-43, 2003.

M. S. Arulampalam, S. Maskell, and N. Gordon, A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking, IEEE Transactions on Signal Processing, vol.50, issue.2, pp.174-188, 2002.
DOI : 10.1109/78.978374

R. B. Ash and R. L. Bishop, Monopoly as a Markov Process, Mathematics Magazine, vol.45, issue.1, pp.26-29, 1972.
DOI : 10.2307/2688377

J. Asmuth, L. Li, M. Littman, A. Nouri, and D. Wingate, A Bayesian sampling approach to exploration in reinforcement learning, Uncertainty in Artificial Intelligence, UAI, pp.19-26, 2009.

P. Avery, S. Louis, and B. Avery, Evolving coordinated spatial tactics for autonomous entities using influence maps, 2009 IEEE Symposium on Computational Intelligence and Games, pp.341-348, 2009.
DOI : 10.1109/CIG.2009.5286457

S. Bakkes, P. Spronck, and E. Postma, TEAM: The Team-Oriented Evolutionary Adaptability Mechanism, pp.273-282, 2004.
DOI : 10.1007/978-3-540-28643-1_36

R. Bellman, A Markovian decision process, Indiana Univ. Math. J., vol.6, pp.679-684, 1957.

C. Bererton, State estimation for game AI using particle filters, AAAI Workshop on Challenges in Game AI, 2004.

P. Bessière, C. Laugier, and R. Siegwart, Probabilistic Reasoning and Decision Making in Sensory-Motor Systems, Springer, ISBN 978-3-540-79006-8, 2008.
DOI : 10.1007/978-3-540-79007-5

D. Billings, J. Schaeffer, and D. Szafron, Poker as a testbed for machine intelligence research, Advances in Artificial Intelligence, pp.1-15, 1998.

D. Billings, N. Burch, A. Davidson, R. C. Holte, J. Schaeffer, et al., Approximating game-theoretic optimal strategies for full-scale poker, Proceedings of IJCAI, pp.661-668, 2003.

M. Booth, The AI Systems of Left 4 Dead, Proceedings of AIIDE, 2009.

C. Browne, E. Powley, D. Whitehouse, S. Lucas, P. Cowling, et al., A survey of Monte Carlo tree search methods, IEEE Transactions on Computational Intelligence and AI in Games, vol.4, issue.1, pp.1-43, 2012.

M. Buro, Real-Time Strategy Games: A New AI Research Challenge, IJCAI, pp.1534-1535, 2003.

M. Buro, Call for AI research in RTS games, Proceedings of the AAAI Workshop on AI in Games, pp.139-141, 2004.

P. Cadena and L. Garrido, Fuzzy Case-Based Reasoning for Managing Strategic and Tactical Reasoning in StarCraft, pp.113-124, 2011.
DOI : 10.1007/978-3-642-25324-9_10

M. Campbell, A. J. Hoane Jr., and F. Hsu, Deep Blue, Artificial Intelligence, vol.134, issue.1-2, pp.57-83, 2002.
DOI : 10.1016/S0004-3702(01)00129-1

C. London and . Center, Copa mercosur tournament

A. Champandard, T. Verweij, and R. Straatman, Killzone 2 multiplayer bots, Paris Game AI Conference, 2009.

E. Charniak and R. P. Goldman, A Bayesian model of plan recognition, Artificial Intelligence, vol.64, issue.1, pp.53-79, 1993.
DOI : 10.1016/0004-3702(93)90060-O

F. Chee, Understanding Korean experiences of online game hype, identity, and the menace of the "wang-tta", DIGRA Conference, 2005.

M. Chung, M. Buro, and J. Schaeffer, Monte Carlo planning in RTS games, Proceedings of IEEE CIG, IEEE, 2005.

D. Churchill and M. Buro, Build order optimization in StarCraft, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pp.170-177, 2011.

F. Colas, J. Diard, and P. Bessière, Common Bayesian Models for Common Cognitive Issues, Acta Biotheoretica, vol.86, issue.3, pp.191-216, 2010.
DOI : 10.1007/s10441-010-9101-1

URL : https://hal.archives-ouvertes.fr/hal-00530356

Progamers income list (contracts): http://www.teamliquid.net/forum/viewmessage

R. T. Cox, Probability, Frequency and Reasonable Expectation, American Journal of Physics, vol.14, issue.1, pp.1-13, 1946.
DOI : 10.1119/1.1990764

M. Cutumisu and D. Szafron, An Architecture for Game Behavior AI: Behavior Multi-Queues, AAAI, 2009.

H. Danielsiek, R. Stuer, A. Thom, N. Beume, B. Naujoks et al., Intelligent moving of groups in real-time strategy games, 2008 IEEE Symposium On Computational Intelligence and Games, pp.71-78, 2008.
DOI : 10.1109/CIG.2008.5035623

B. de Finetti, La prévision : ses lois logiques, ses sources subjectives, Annales de l'Institut Henri Poincaré, pp.1-68, 1937.

D. Demyen and M. Buro, Efficient triangulation-based pathfinding, Proceedings of the 21st national conference on Artificial intelligence, pp.942-947, 2006.

E. Dereszynski, J. Hostetler, A. Fern, T. Hoang, and M. Udarbe, Learning probabilistic behavior models in real-time strategy games, AAAI, editor, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2011.

J. Diard, P. Bessière, and E. Mazer, A survey of probabilistic models using the Bayesian programming methodology as a unifying framework, Conference on Computational Intelligence, 2003.
URL : https://hal.archives-ouvertes.fr/hal-00019254

K. Erol, J. Hendler, and D. S. Nau, HTN Planning: Complexity and Expressivity, Proceedings of AAAI, pp.1123-1128, 1994.
DOI : 10.1007/bf02136175

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.6009

R. E. Fikes and N. J. Nilsson, STRIPS: A new approach to the application of theorem proving to problem solving, Artificial Intelligence, vol.2, issue.3-4, pp.189-208, 1971.

K. D. Forbus, J. V. Mahoney, and K. Dill, How qualitative spatial reasoning can improve strategy game AIs, IEEE Intelligent Systems, vol.17, issue.4, pp.25-30, 2002.
DOI : 10.1109/MIS.2002.1024748

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.1100

A. S. Fraenkel and D. Lichtenstein, Computing a Perfect Strategy for n x n Chess Requires Time Exponential in n, J. Comb. Theory, Ser. A, vol.31, issue.2, pp.199-214, 1981.

C. Fraley and A. E. Raftery, Model-Based Clustering, Discriminant Analysis, and Density Estimation, Journal of the American Statistical Association, vol.97, issue.458, pp.611-631, 2002.
DOI : 10.1198/016214502760047131

C. Fraley and A. E. Raftery, MCLUST version 3 for R: Normal mixture modeling and model-based clustering, Technical Report, 2006.

O. François and P. Leray, Etude comparative d'algorithmes d'apprentissage de structure dans les réseaux bayésiens, Journal électronique d'intelligence artificielle, vol.5, issue.39, pp.1-19, 2004.

C. Frayn, An evolutionary approach to strategies for the game of Monopoly, CIG, 2005.

C. W. Geib and R. P. Goldman, A probabilistic plan recognition algorithm based on plan tree grammars, Artificial Intelligence, vol.173, pp.1101-1132, 2009.

S. Gelly and Y. Wang, Exploration exploitation in Go: UCT for Monte-Carlo Go, Proceedings of NIPS, Canada, 2006.
URL : https://hal.archives-ouvertes.fr/hal-00115330

S. Gelly, Y. Wang, R. Munos, and O. Teytaud, Modification of UCT with Patterns in Monte-Carlo Go, 2006.
URL : https://hal.archives-ouvertes.fr/inria-00117266

E. A. Gunn, B. G. Craenen, and E. Hart, A Taxonomy of Video Games and AI, pp.33-34, 2009.

J. Hagelbäck and S. J. Johansson, Dealing with fog of war in a Real Time Strategy game environment, 2008 IEEE Symposium On Computational Intelligence and Games, pp.55-62, 2008.
DOI : 10.1109/CIG.2008.5035621

J. Hagelbäck and S. J. Johansson, A multiagent potential field-based bot for real-time strategy games, Int. J. Comput. Games Technol., 2009.

J. Hagelbäck and S. J. Johansson, A study on human like characteristics in real time strategy games, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, 2010.
DOI : 10.1109/ITW.2010.5593362

D. H. Hale, G. M. Youngblood, and P. N. Dixit, Automatically-generated convex region decomposition for real-time spatial agent navigation in virtual worlds, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pp.173-178, 2008.

N. Hay and S. J. Russell, Metareasoning for Monte Carlo tree search, Technical Report, UC Berkeley, 2011.

R. A. Hearn and E. D. Demaine, Games, Puzzles, and Computation, A K Peters, 2009.

P. Hingston, A Turing Test for Computer Game Bots, IEEE Transactions on Computational Intelligence and AI in Games, vol.1, issue.3, pp.169-186, 2009.
DOI : 10.1109/TCIAIG.2009.2032534

S. Hladky and V. Bulitko, An evaluation of models for predicting opponent positions in first-person shooter video games, 2008 IEEE Symposium On Computational Intelligence and Games, 2008.
DOI : 10.1109/CIG.2008.5035619

H. Hoang, S. Lee-Urban, and H. Muñoz-Avila, Hierarchical plan representations for encoding strategic game AI, AIIDE, pp.63-68, 2005.

R. Houlette and D. Fu, The ultimate guide to FSMs in games, AI Game Programming Wisdom 2, 2003.

J. Hsieh and C. Sun, Building a player strategy model by analyzing replays of real-time strategy games, 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pp.3106-3111, 2008.
DOI : 10.1109/IJCNN.2008.4634237

D. Isla, Handling complexity in the Halo 2 AI, Game Developers Conference, 2005.

E. T. Jaynes, Probability Theory: The Logic of Science, 2003.
DOI : 10.1017/CBO9780511790423

B. Jónsson, Representing uncertainty in RTS games, 2012.

F. Kabanza, P. Bellefeuille, F. Bisson, A. R. Benaskeur, and H. Irandoust, Opponent behaviour recognition for real-time strategy games, AAAI Workshops, 2010.

R. Kalman, A New Approach to Linear Filtering and Prediction Problems, Journal of Basic Engineering, vol.82, issue.1, pp.35-45, 1960.
DOI : 10.1115/1.3662552

D. Kline, Bringing Interactive Storytelling to Industry, Proceedings of AIIDE

D. Kline, The AI director in Darkspore, Paris Game AI Conference, 2011.

L. Kocsis and C. Szepesvári, Bandit Based Monte-Carlo Planning, In: ECML-06. Number 4212 in LNCS, pp.282-293, 2006.
DOI : 10.1007/11871842_29

D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques, 2009.

D. Koller and A. Pfeffer, Representations and solutions for game-theoretic problems, Artificial Intelligence, vol.94, issue.1-2, pp.167-215, 1997.
DOI : 10.1016/S0004-3702(97)00023-4

A. N. Kolmogorov, Grundbegriffe der Wahrscheinlichkeitsrechnung, 1933.
DOI : 10.1007/978-3-642-49888-6

K. B. Korb, A. E. Nicholson, and N. Jitnah, Bayesian poker, Uncertainty in Artificial Intelligence, pp.343-350, 1999.

V. Kuleshov and D. Precup, Algorithms for the multi-armed bandit problem, JMLR, p.176, 2000.

J. E. Laird, It knows what you're going to do, Proceedings of the fifth international conference on Autonomous agents , AGENTS '01, pp.385-392, 2001.
DOI : 10.1145/375735.376343

J. E. Laird and M. van Lent, Human-level AI's killer application: Interactive computer games, AI Magazine, vol.22, issue.2, pp.15-26, 2001.

P.-S. de Laplace, Essai philosophique sur les probabilités, 1814.

R. Le Hy, Programmation et apprentissage bayésien de comportements pour des personnages synthétiques, application aux personnages de jeux vidéos, PhD thesis, 2007.

R. Le Hy, A. Arrigoni, P. Bessière, and O. Lebeltel, Teaching Bayesian Behaviours to Video Game Characters, Robotics and Autonomous Systems, 2004.

O. Lebeltel, P. Bessière, J. Diard, and E. Mazer, Bayesian Robot Programming, Autonomous Robots, vol.16, issue.1, pp.49-79, 2004.
DOI : 10.1023/B:AURO.0000008671.38949.43

URL : https://hal.archives-ouvertes.fr/inria-00189723

P. Leray and O. François, Bayesian network structural learning and incomplete data, Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR 2005), pp.33-40, 2005.

D. Lichtenstein and M. Sipser, GO is PSPACE hard, 19th Annual Symposium on Foundations of Computer Science (SFCS 1978), pp.48-54, 1978.
DOI : 10.1109/SFCS.1978.17

D. Loiacono, J. Togelius, P. L. Lanzi, L. Kinnaird-heether, S. M. Lucas et al., The WCCI 2008 simulated car racing competition, 2008 IEEE Symposium On Computational Intelligence and Games, pp.119-126, 2008.
DOI : 10.1109/CIG.2008.5035630

I. Lynce and J. Ouaknine, Sudoku as a SAT problem, Proc. of the Ninth International Symposium on Artificial Intelligence and Mathematics, 2006.

D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.

C. Madeira, V. Corruble, and G. Ramalho, Designing a reinforcement learning-based adaptive AI for large-scale strategy games, AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), 2006.
URL : https://hal.archives-ouvertes.fr/hal-01351276

M. Mehta, S. Ontañón, and A. Ram, Meta-level behavior adaptation in real-time strategy games, ICCBR-10 Workshop on Case-Based Reasoning for Computer Games, 2010.

B. Marthi, S. Russell, D. Latham, and C. Guestrin, Concurrent hierarchical reinforcement learning, IJCAI, pp.779-785, 2005.

B. Martin, Instance-based learning: nearest neighbour with generalisation, 1995.

K. Mekhnacha, J. Ahuactzin, P. Bessière, E. Mazer, and L. Smail, Exact and approximate inference in ProBT. Revue d'Intelligence Artificielle, pp.295-332, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00338763

C. Miles, J. C. Quiroz, R. E. Leigh, and S. J. Louis, Co-Evolving Influence Map Tree Based Strategy Game Players, 2007 IEEE Symposium on Computational Intelligence and Games, pp.88-95, 2007.
DOI : 10.1109/CIG.2007.368083

K. Mishra, S. Ontañón, and A. Ram, Situation Assessment for Plan Retrieval in Real-Time Strategy Games, Lecture Notes in Computer Science, vol.5239, pp.355-369, 2008.
DOI : 10.1007/978-3-540-85502-6_24

M. Molineaux, D. W. Aha, and P. Moore, Learning continuous action models in a real-time strategy environment, FLAIRS Conference, pp.257-262, 2008.

J. Nash, Non-Cooperative Games, The Annals of Mathematics, vol.54, issue.2, pp.286-295, 1951.
DOI : 10.2307/1969529

K. Olsen, South Korean gamers get a sneak peek at 'StarCraft II', 2007.

S. Ontañón, K. Mishra, N. Sugandh, and A. Ram, Case-Based Planning and Execution for Real-Time Strategy Games, Proceedings of ICCBR, ICCBR '07, pp.164-178, 2007.
DOI : 10.1007/978-3-540-74141-1_12

S. Ontañón, K. Mishra, N. Sugandh, and A. Ram, Learning from Demonstration and Case-Based Planning for Real-Time Strategy Games, Soft Computing Applications in Industry of Studies in Fuzziness and Soft Computing, pp.293-310, 2008.
DOI : 10.1007/978-3-540-77465-5_15

J. Orkin, Three States and a Plan: The A.I. of F.E.A.R., GDC, 2006.

M. J. Osborne and A. Rubinstein, A course in game theory, 1994.

C. Papadimitriou and J. N. Tsitsiklis, The Complexity of Markov Decision Processes, Mathematics of Operations Research, vol.12, issue.3, pp.441-450, 1987.
DOI : 10.1287/moor.12.3.441

J. Pearl, Probabilistic reasoning in intelligent systems: networks of plausible inference, 1988.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion et al., Scikit-learn: Machine Learning in Python, Journal of Machine Learning Research, vol.12, pp.2825-2830, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00650905

A. Uriarte Pérez and S. Ontañón Villar, Multi-reactive planning for real-time strategy games, 2011.

L. Perkins, Terrain analysis in real-time strategy games: An integrated approach to choke point detection and region decomposition, 2010.

A. Plaat, Research, re: search and re-search, 1996.

M. Ponsen and P. Spronck, Improving Adaptive Game AI with Evolutionary Learning, 2004.

M. J. V. Ponsen, H. Muñoz-Avila, P. Spronck, and D. W. Aha, Automatically generating game tactics through evolutionary learning, AI Magazine, vol.27, issue.3, pp.75-84, 2006.

D. C. Pottinger, Terrain analysis for real-time strategy games, Proceedings of Game Developers Conference, 2000.

M. Preuss, N. Beume, H. Danielsiek, T. Hein, B. Naujoks et al., Towards Intelligent Team Composition and Maneuvering in Real-Time Strategy Games, Transactions on Computational Intelligence and AI in Games, pp.82-98, 2010.
DOI : 10.1109/TCIAIG.2010.2047645

J. R. Quinlan, C4.5: Programs for Machine Learning, 1993.

S. Rabin, Implementing a state machine language. AI Game Programming Wisdom, pp.314-320, 2002.

L. Rabiner, A tutorial on HMM and selected applications in speech recognition, Proceedings of the IEEE, pp.257-286, 1989.

M. Ramírez and H. Geffner, Plan recognition as planning, Proceedings of IJCAI, pp.1778-1783, 2009.

A. Reinefeld, An Improvement to the Scout Tree-Search Algorithm, International Computer Chess Association Journal, vol.6, issue.4, pp.4-14, 1983.

C. W. Reynolds, Steering behaviors for autonomous characters, Proceedings of Game Developers Conference 1999, pp.763-782, 1999.

M. Riedl, B. Li, H. Ai, and A. Ram, Robust and authorable multiplayer storytelling experiences, 2011.

J. M. Robson, The complexity of Go, IFIP Congress, pp.413-417, 1983.

P. Rohlfshagen and S. M. Lucas, Ms Pac-Man versus Ghost Team CEC 2011 competition, 2011 IEEE Congress of Evolutionary Computation (CEC), pp.70-77, 2011.
DOI : 10.1109/CEC.2011.5949599

URL : http://repository.essex.ac.uk/4112/1/PacmanVersusGhostTeam2011.pdf

S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2010.

F. Schadd, S. Bakkes, and P. Spronck, Opponent modeling in real-time strategy games, pp.61-68, 2007.

J. Schaeffer, N. Burch, Y. Björnsson, A. Kishimoto, M. Müller, et al., Checkers is solved, Science, vol.317, issue.5844, pp.1518-1522, 2007. Named by Science Magazine as one of the 10 most important scientific achievements of 2007.

J. Schrum, I. V. Karpov, and R. Miikkulainen, UT^2: Human-like behavior via neuroevolution of combat behavior and replay of human traces, Proceedings of IEEE CIG, pp.329-336, 2011.

G. Schwarz, Estimating the Dimension of a Model. The Annals of Statistics, pp.461-464, 1978.

N. Shaker, J. Togelius, and G. N. Yannakakis, Towards Automatic Personalized Content Generation for Platform Games, Proceedings of AIIDE, 2010.

C. E. Shannon, Programming a Computer for Playing Chess, Philosophical Magazine, issue.2, pp.22-65, 1950.
DOI : 10.1007/978-1-4757-1968-0_1

M. Sharma, M. Holmes, J. Santamaria, A. Irani, C. L. Isbell, et al., Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL, International Joint Conference of Artificial Intelligence, IJCAI, 2007.

Simon Fraser University, SkillCraft: http://skillcraft.ca

H. Simonis, Sudoku as a constraint problem, CP Workshop on Modeling and Reformulating Constraint Satisfaction Problems, pp.13-27, 2005.

G. Smith, P. Avery, R. Houmanfar, and S. Louis, Using co-evolved RTS opponents to teach spatial tactics, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, 2010.
DOI : 10.1109/ITW.2010.5593359

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.173.5335

E. J. Sondik, The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs, Operations Research, vol.26, issue.2, pp.282-304, 1978.
DOI : 10.1287/opre.26.2.282

F. Southey, M. Bowling, B. Larson, C. Piccione, N. Burch et al., Bayes' bluff: Opponent modelling in poker, Proceedings of UAI, pp.550-558, 2005.

N. Sturtevant, Benchmarks for grid-based pathfinding, Transactions on Computational Intelligence and AI in Games, 2012.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning), 1998.

G. Synnaeve and P. Bessière, Bayesian Modeling of a Human MMORPG Player, 30th international workshop on Bayesian Inference and Maximum Entropy, 2010.
DOI : 10.1063/1.3573658

URL : https://hal.archives-ouvertes.fr/inria-00538744

G. Synnaeve and P. Bessière, A Bayesian model for RTS units control applied to StarCraft, 2011 IEEE Conference on Computational Intelligence and Games (CIG'11), 2011.
DOI : 10.1109/CIG.2011.6032006

URL : https://hal.archives-ouvertes.fr/hal-00607281

G. Synnaeve and P. Bessière, A Bayesian model for opening prediction in RTS games with application to StarCraft, 2011 IEEE Conference on Computational Intelligence and Games (CIG'11), pp.95-174, 2011.
DOI : 10.1109/CIG.2011.6032018

URL : https://hal.archives-ouvertes.fr/hal-00607277

G. Synnaeve and P. Bessière, A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft, AAAI, editor, Proceedings of AIIDE, pp.79-84, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00641323

G. Synnaeve and P. Bessière, A Dataset for StarCraft AI & an Example of Armies Clustering, Artificial Intelligence in Adversarial Real-Time Games: Papers from the 2012 AIIDE Workshop, AAAI Technical Report, pp.25-30, 2012.

G. Synnaeve and P. Bessière, Special tactics: A Bayesian approach to tactical decision-making, 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012.
DOI : 10.1109/CIG.2012.6374184

URL : https://hal.archives-ouvertes.fr/hal-00752841

J. B. Tenenbaum, V. D. Silva, and J. C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction, Science, vol.290, issue.5500, pp.2319-2323, 2000.
DOI : 10.1126/science.290.5500.2319

C. Thiery and B. Scherrer, Building Controllers for Tetris, ICGA Journal, vol.32, issue.1, pp.3-11, 2009.
DOI : 10.3233/ICG-2009-32102

URL : https://hal.archives-ouvertes.fr/inria-00418954

S. Thrun, Particle filters in robotics, Proceedings of the 17th Annual Conference on Uncertainty in AI (UAI), 2002.

J. Togelius, S. Karakovskiy, and R. Baumgarten, The 2009 Mario AI Competition, IEEE Congress on Evolutionary Computation, pp.1-8, 2010.
DOI : 10.1109/CEC.2010.5586133

A. Treuille, S. Cooper, and Z. Popović, Continuum crowds, ACM Transactions on Graphics, vol.25, issue.3, pp.1160-1168, 2006.
DOI : 10.1145/1141911.1142008

J. Tromp and G. Farnebäck, Combinatorics of Go, 2006.
DOI : 10.1007/978-3-540-75538-8_8

W. van der Sterren, Multi-Unit Planning with HTN and A*, Paris Game AI Conference, 2009.

J. M. van Waveren and L. J. Rothkrantz, Artificial player for Quake III Arena, International Journal of Intelligent Games & Simulation (IJIGS), vol.1, issue.1, pp.25-32, 2002.

G. Viglietta, Gaming is a hard job, but someone has to do it!, arXiv, 2012.

J. Von-neumann and O. Morgenstern, Theory of Games and Economic Behavior, 1944.

B. G. Weber, Integrating Learning in a Multi-Scale Agent, PhD thesis, 2012.

B. G. Weber and M. Mateas, A data mining approach to strategy prediction, CIG (IEEE), 2009.

B. G. Weber, M. Mateas, and A. Jhala, Applying goal-driven autonomy to StarCraft, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2010a.

B. G. Weber, P. Mawhorter, M. Mateas, and A. Jhala, Reactive planning idioms for multi-scale game AI, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, 2010.
DOI : 10.1109/ITW.2010.5593363

B. G. Weber, M. Mateas, and A. Jhala, A particle model for state estimation in realtime strategy games, Proceedings of AIIDE, pp.103-108, 2011.

J. Westra and F. Dignum, Evolutionary neural networks for Non-Player Characters in Quake III, 2009 IEEE Symposium on Computational Intelligence and Games, 2009.
DOI : 10.1109/CIG.2009.5286460