B. Argall, S. Chernova, M. Veloso, and B. Browning, A survey of robot learning from demonstration, Robotics and Autonomous Systems, vol.57, issue.5, 2009.
DOI : 10.1016/j.robot.2008.10.024

M. Lopes, F. Melo, L. Montesano, and J. Santos-Victor, Abstraction Levels for Robotic Imitation: Overview and Computational Approaches, in From Motor to Interaction Learning in Robots, ser. Studies in Computational Intelligence, pp.313-355, 2010.
DOI : 10.1007/978-3-642-05181-4_14

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.157.9428

M. Nicolescu and M. Mataric, Natural methods for robot task learning, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, AAMAS '03, pp.241-248, 2003.
DOI : 10.1145/860575.860614

S. Nguyen, A. Baranes, and P. Oudeyer, Bootstrapping intrinsically motivated learning with human demonstration, 2011 IEEE International Conference on Development and Learning (ICDL), 2011.
DOI : 10.1109/DEVLRN.2011.6037329

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.231.6728

S. Calinon, F. Guenter, and A. Billard, On Learning, Representing, and Generalizing a Task in a Humanoid Robot, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol.37, issue.2, 2007.
DOI : 10.1109/TSMCB.2006.886952

M. Lopes, F. S. Melo, and L. Montesano, Affordance-based imitation learning in robots, 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1015-1021, 2007.
DOI : 10.1109/IROS.2007.4399517

A. L. Thomaz and C. Breazeal, Teachable robots: Understanding human teaching behavior to build more effective robot learners, Artificial Intelligence, vol.172, issue.6-7, pp.716-737, 2008.
DOI : 10.1016/j.artint.2007.09.009

P. Abbeel and A. Y. Ng, Apprenticeship learning via inverse reinforcement learning, Twenty-first international conference on Machine learning, ICML '04, pp.1-8, 2004.
DOI : 10.1145/1015330.1015430

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.2.92

S. Chernova and M. Veloso, Interactive policy learning through confidence-based autonomy, J. Artificial Intelligence Research, 2009.

C. Breazeal, A. Brooks, J. Gray, G. Hoffman, J. Lieberman et al., Tutelage and Collaboration for Humanoid Robots, International Journal of Humanoid Robotics, vol.1, issue.2, 2004.
DOI : 10.1142/S0219843604000150

M. Lopes, F. S. Melo, and L. Montesano, Active Learning for Reward Estimation in Inverse Reinforcement Learning, Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, ser. ECML PKDD '09, pp.31-46, 2009.
DOI : 10.1007/978-3-642-04174-7_3

M. Mason and M. Lopes, Robot self-initiative and personalization by learning through repeated interactions, Proceedings of the 6th international conference on Human-robot interaction, HRI '11, 2011.
DOI : 10.1145/1957656.1957814

URL : https://hal.archives-ouvertes.fr/hal-00636164

K. Judah, S. Roy, A. Fern, and T. Dietterich, Reinforcement learning via practice and critique advice, Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), 2010.

W. Knox and P. Stone, Interactively shaping agents via human reinforcement, Proceedings of the fifth international conference on Knowledge capture, K-CAP '09, pp.9-16, 2009.
DOI : 10.1145/1597735.1597738

M. Cakmak and A. Thomaz, Optimality of human teachers for robot learners, 2010 IEEE 9th International Conference on Development and Learning, 2010.
DOI : 10.1109/DEVLRN.2010.5578865

P. Rouanet, P. Oudeyer, F. Danieau, and D. Filliat, The Impact of Human–Robot Interfaces on the Learning of Visual Objects, IEEE Transactions on Robotics, vol.29, issue.2, 2013.
DOI : 10.1109/TRO.2012.2228134

M. Lopes, T. Cederborg, and P. Oudeyer, Simultaneous acquisition of task and feedback models, 2011 IEEE International Conference on Development and Learning (ICDL), pp.1-7, 2011.
DOI : 10.1109/DEVLRN.2011.6037359

URL : https://hal.archives-ouvertes.fr/hal-00636166

M. Heckmann et al., Teaching a humanoid robot: Headset-free speech interaction for audio-visual association learning, RO-MAN 2009, The 18th IEEE International Symposium on Robot and Human Interactive Communication, pp.422-427, 2009.
DOI : 10.1109/ROMAN.2009.5326338

P. Kindermans, D. Verstraeten, and B. Schrauwen, A Bayesian Model for Exploiting Application Constraints to Enable Unsupervised Training of a P300-based BCI, PLoS ONE, vol.7, issue.4, p.e33758, 2012.
DOI : 10.1371/journal.pone.0033758

R. Sutton and A. Barto, Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press, 1998.

D. Ramachandran and E. Amir, Bayesian inverse reinforcement learning, 20th Int. Joint Conf. Artificial Intelligence, 2007.

F. Zheng, G. Zhang, and Z. Song, Comparison of different implementations of MFCC, Journal of Computer Science and Technology, vol.16, issue.6, 2001.
DOI : 10.1007/BF02943243

H. Sakoe and S. Chiba, Dynamic programming algorithm optimization for spoken word recognition, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol.26, issue.1, pp.43-49, 1978.
DOI : 10.1109/TASSP.1978.1163055

J. Platt, Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods, in Advances in Large Margin Classifiers, pp.61-74, 1999.

M. Cakmak and A. Thomaz, Designing robot learners that ask good questions, Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, HRI '12, 2012.
DOI : 10.1145/2157689.2157693

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.650.7240

R. Sutton, D. Precup, and S. Singh, Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning, Artificial Intelligence, vol.112, issue.1-2, pp.181-211, 1999.
DOI : 10.1016/S0004-3702(99)00052-1