M. Buro, Real-time strategy games: A new AI research challenge, IJCAI 2003, International Joint Conferences on Artificial Intelligence, pp.1534-1535, 2003.

M. Buro and D. Churchill, Real-time strategy game competitions, AI Magazine, vol.33, issue.3, pp.106-108, 2012.

G. Synnaeve, Bayesian Programming and Learning for Multi-Player Video Games, 2012.
URL : https://hal.archives-ouvertes.fr/tel-00780635

B. G. Weber and M. Mateas, A data mining approach to strategy prediction, 2009 IEEE Symposium on Computational Intelligence and Games, 2009.
DOI : 10.1109/CIG.2009.5286483

G. Synnaeve and P. Bessiere, A Dataset for StarCraft AI & an Example of Armies Clustering, AIIDE Workshop on AI in Adversarial Real-time Games, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00752893

B. G. Weber, M. Mateas, and A. Jhala, Building human-level AI for real-time strategy games, Proceedings of AIIDE Fall Symposium on Advances in Cognitive Systems, 2011.

C. E. Miles, Co-evolving real-time strategy game players, 2007.
DOI : 10.1109/cig.2007.368083

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.180.9775

R. Houlette and D. Fu, The ultimate guide to FSMs in games, AI Game Programming Wisdom 2, 2003.

S. Ontañón, K. Mishra, N. Sugandh, and A. Ram, Learning from Demonstration and Case-Based Planning for Real-Time Strategy Games, Soft Computing Applications in Industry, ser. Studies in Fuzziness and Soft Computing, pp.293-310, 2008.
DOI : 10.1007/978-3-540-77465-5_15

K. Mishra, S. Ontañón, and A. Ram, Situation Assessment for Plan Retrieval in Real-Time Strategy Games, ECCBR, pp.355-369, 2008.
DOI : 10.1007/978-3-540-85502-6_24

H. Hoang, S. Lee-Urban, and H. Muñoz-Avila, Hierarchical plan representations for encoding strategic game AI, AIIDE, pp.63-68, 2005.

D. Churchill and M. Buro, Build order optimization in StarCraft, Proceedings of AIIDE, pp.14-19, 2011.

E. Dereszynski, J. Hostetler, A. Fern, T. Dietterich, T. Hoang et al., Learning probabilistic behavior models in real-time strategy games, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), AAAI, 2011.

G. Synnaeve and P. Bessiere, A Bayesian model for opening prediction in RTS games with application to StarCraft, 2011 IEEE Conference on Computational Intelligence and Games (CIG'11), 2011.
DOI : 10.1109/CIG.2011.6032018

URL : https://hal.archives-ouvertes.fr/hal-00607277

G. Synnaeve and P. Bessière, A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft, Proceedings of the Seventh Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2011), AAAI, pp.79-84, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00641323

J. Young and N. Hawes, Evolutionary learning of goal priorities in a real-time strategy game, 2012.

A. Aamodt and E. Plaza, Case-based reasoning: Foundational issues, methodological variations, and system approaches, Artificial Intelligence Communications, vol.7, issue.1, pp.39-59, 1994.

D. W. Aha, M. Molineaux, and M. J. Ponsen, Learning to Win: Case-Based Plan Selection in a Real-Time Strategy Game, ICCBR, pp.5-20, 2005.
DOI : 10.1007/11536406_4

J. Hsieh and C. Sun, Building a player strategy model by analyzing replays of real-time strategy games, 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pp.3106-3111, 2008.
DOI : 10.1109/IJCNN.2008.4634237

F. Schadd, S. Bakkes, and P. Spronck, Opponent modeling in real-time strategy games, GAMEON, pp.61-70, 2007.

U. Jaidee, H. Muñoz-Avila, and D. W. Aha, Case-based learning in goal-driven autonomy agents for real-time strategy combat tasks, Proceedings of the ICCBR Workshop on Computer Games, pp.43-52, 2011.

M. Čertický and M. Čertický, Case-based reasoning for army compositions in real-time strategy games, Proceedings of the Scientific Conference of Young Researchers, pp.70-73, 2013.

B. G. Weber, M. Mateas, and A. Jhala, A particle model for state estimation in real-time strategy games, Proceedings of AIIDE, pp.103-108, 2011.

B. G. Weber, P. Mawhorter, M. Mateas, and A. Jhala, Reactive planning idioms for multi-scale game AI, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, 2010.
DOI : 10.1109/ITW.2010.5593363

B. G. Weber, M. Mateas, and A. Jhala, Applying goal-driven autonomy to StarCraft, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2010.

D. C. Pottinger, Terrain analysis for real-time strategy games, Proceedings of Game Developers Conference, 2000.

K. D. Forbus, J. V. Mahoney, and K. Dill, How qualitative spatial reasoning can improve strategy game AIs, IEEE Intelligent Systems, vol.17, issue.4, pp.25-30, 2002.
DOI : 10.1109/MIS.2002.1024748

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.1100

D. H. Hale, G. M. Youngblood, and P. N. Dixit, Automatically-generated convex region decomposition for real-time spatial agent navigation in virtual worlds, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pp.173-178, 2008.

L. Perkins, Terrain analysis in real-time strategy games: An integrated approach to choke point detection and region decomposition, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pp.168-173, 2010.

M. Čertický, Implementing a wall-in building placement in StarCraft with declarative programming, 2013.

S. Hladky and V. Bulitko, An evaluation of models for predicting opponent positions in first-person shooter video games, 2008 IEEE Symposium On Computational Intelligence and Games, 2008.
DOI : 10.1109/CIG.2008.5035619

F. Kabanza, P. Bellefeuille, F. Bisson, A. R. Benaskeur, and H. Irandoust, Opponent behaviour recognition for real-time strategy games, AAAI Workshops, 2010.

C. W. Geib and R. P. Goldman, A probabilistic plan recognition algorithm based on plan tree grammars, Artificial Intelligence, vol.173, issue.11, pp.1101-1132, 2009.
DOI : 10.1016/j.artint.2009.01.003

M. Sharma, M. Holmes, J. Santamaria, A. Irani, C. L. Isbell et al., Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL, International Joint Conference on Artificial Intelligence, IJCAI, 2007.

P. Cadena and L. Garrido, Fuzzy Case-Based Reasoning for Managing Strategic and Tactical Reasoning in StarCraft, MICAI (1), ser. Lecture Notes in Computer Science, pp.113-124, 2011.
DOI : 10.1007/978-3-642-25324-9_10

G. Synnaeve and P. Bessière, Special tactics: A Bayesian approach to tactical decision-making, 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012.
DOI : 10.1109/CIG.2012.6374184

URL : https://hal.archives-ouvertes.fr/hal-00752841

C. Miles and S. J. Louis, Co-evolving real-time strategy game playing influence map trees with genetic algorithms, Proceedings of the International Congress on Evolutionary Computation, 2006.

D. Churchill, A. Saffidine, and M. Buro, Fast heuristic search for RTS game combat scenarios, AIIDE, 2012.

M. Chung, M. Buro, and J. Schaeffer, Monte Carlo planning in RTS games, IEEE Symposium on Computational Intelligence and Games (CIG), 2005.

R. Balla and A. Fern, UCT for tactical assault planning in real-time strategy games, International Joint Conference on Artificial Intelligence, IJCAI, 2009.

A. Uriarte and S. Ontañón, Kiting in RTS games using influence maps, Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.

J. Hagelbäck and S. J. Johansson, A Multiagent Potential Field-Based Bot for Real-Time Strategy Games, International Journal of Computer Games Technology, vol.2009, 2009.

J. Hagelbäck, Potential-field based navigation in StarCraft, 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012.
DOI : 10.1109/CIG.2012.6374181

J. Hagelbäck and S. J. Johansson, Dealing with fog of war in a Real Time Strategy game environment, 2008 IEEE Symposium On Computational Intelligence and Games, pp.55-62, 2008.
DOI : 10.1109/CIG.2008.5035621

P. Avery, S. Louis, and B. Avery, Evolving coordinated spatial tactics for autonomous entities using influence maps, 2009 IEEE Symposium on Computational Intelligence and Games, pp.341-348, 2009.
DOI : 10.1109/CIG.2009.5286457

G. Smith, P. Avery, R. Houmanfar, and S. Louis, Using co-evolved RTS opponents to teach spatial tactics, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, 2010.
DOI : 10.1109/ITW.2010.5593359

H. Danielsiek, R. Stuer, A. Thom, N. Beume, B. Naujoks et al., Intelligent moving of groups in real-time strategy games, 2008 IEEE Symposium On Computational Intelligence and Games, pp.71-78, 2008.
DOI : 10.1109/CIG.2008.5035623

L. Liu and L. Li, Regional Cooperative Multi-agent Q-learning Based on Potential Field, 2008 Fourth International Conference on Natural Computation, pp.535-539, 2008.
DOI : 10.1109/ICNC.2008.173

M. Preuss, N. Beume, H. Danielsiek, T. Hein, B. Naujoks et al., Towards Intelligent Team Composition and Maneuvering in Real-Time Strategy Games, Transactions on Computational Intelligence and AI in Games (TCIAIG), pp.82-98, 2010.
DOI : 10.1109/TCIAIG.2010.2047645

G. Synnaeve and P. Bessiere, A Bayesian model for RTS units control applied to StarCraft, 2011 IEEE Conference on Computational Intelligence and Games (CIG'11), 2011.
DOI : 10.1109/CIG.2011.6032006

URL : https://hal.archives-ouvertes.fr/hal-00607281

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning), 1998.

S. Wender and I. Watson, Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft: Brood War, 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012.
DOI : 10.1109/CIG.2012.6374183

B. Marthi, S. Russell, D. Latham, and C. Guestrin, Concurrent hierarchical reinforcement learning, International Joint Conference on Artificial Intelligence, IJCAI, pp.779-785, 2005.

C. Madeira, V. Corruble, and G. Ramalho, Designing a reinforcement learning-based adaptive AI for large-scale strategy games, AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), 2006.
URL : https://hal.archives-ouvertes.fr/hal-01351276

U. Jaidee and H. Muñoz-Avila, CLASSQ-L: A Q-learning algorithm for adversarial real-time strategy games, Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.

M. Ponsen and P. Spronck, Improving adaptive game AI with evolutionary learning, pp.389-396, 2004.

N. Othman, J. Decraene, W. Cai, N. Hu, and A. Gouaillard, Simulation-based optimization of StarCraft tactical AI through evolutionary computation, CIG (IEEE), 2012.

S. Wintermute, J. Z. Xu, and J. E. Laird, SORTS: A human-level approach to real-time strategy AI, AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), pp.55-60, 2007.

D. Demyen and M. Buro, Efficient triangulation-based pathfinding, Proceedings of the 21st national conference on Artificial intelligence, pp.942-947, 2006.

C. W. Reynolds, Steering behaviors for autonomous characters, Proceedings of Game Developers Conference, pp.763-782, 1999.

A. Treuille, S. Cooper, and Z. Popović, Continuum crowds, ACM Transactions on Graphics, vol.25, issue.3, pp.1160-1168, 2006.
DOI : 10.1145/1141911.1142008

N. Sturtevant, Benchmarks for Grid-Based Pathfinding, Transactions on Computational Intelligence and AI in Games, 2012.
DOI : 10.1109/TCIAIG.2012.2197681

S. Ontañón, K. Mishra, N. Sugandh, and A. Ram, On-line case-based planning, Computational Intelligence, vol.26, issue.1, pp.84-119, 2010.

S. Ontañón, The combinatorial multi-armed bandit problem and its application to real-time strategy games, AIIDE, 2013.

M. Molineaux, D. W. Aha, and P. Moore, Learning continuous action models in a real-time strategy environment, FLAIRS Conference, pp.257-262, 2008.

J. Young, F. Smith, C. Atkinson, K. Poyner, and T. Chothia, SCAIL: An integrated StarCraft AI system, 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012.
DOI : 10.1109/CIG.2012.6374188

P. Auer, N. Cesa-Bianchi, and P. Fischer, Finite-time analysis of the multiarmed bandit problem, Machine Learning, vol.47, issue.2/3, pp.235-256, 2002.
DOI : 10.1023/A:1013689704352

G. Tesauro, Comparison training of chess evaluation functions, in Machines That Learn to Play Games, pp.117-130, 2001.

J. Rubin and I. Watson, Computer poker: A review, Artificial Intelligence, vol.175, issue.5-6, pp.958-987, 2011.
DOI : 10.1016/j.artint.2010.12.005

I. Refanidis and I. Vlahavas, Heuristic planning with resources, ECAI, pp.521-525, 2000.

S. Branavan, D. Silver, and R. Barzilay, Learning to win by reading manuals in a monte-carlo framework, Proceedings of ACL, pp.268-277, 2011.