W. S. Cleveland and S. J. Devlin, Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting, Journal of the American Statistical Association, vol.83, issue.403, pp.596-610, 1988.
DOI : 10.1080/01621459.1988.10478639

C. G. Atkeson, A. W. Moore, and S. Schaal, Locally Weighted Learning, Artificial Intelligence Review, vol.11, issue.1-5, pp.11-73, 1997.
DOI : 10.1023/A:1006559212014

R. Penrose, A generalized inverse for matrices, Mathematical Proceedings of the Cambridge Philosophical Society, vol.51, issue.3, pp.406-413, 1955.
DOI : 10.1017/S0305004100030401

R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, A Limited Memory Algorithm for Bound Constrained Optimization, SIAM Journal on Scientific Computing, vol.16, issue.5, pp.1190-1208, 1995.
DOI : 10.1137/0916069

C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization, ACM Transactions on Mathematical Software, vol.23, issue.4, pp.550-560, 1997.
DOI : 10.1145/279232.279236

A. F. Siegel, Robust regression using repeated medians, Biometrika, vol.69, issue.1, pp.242-244, 1982.
DOI : 10.1093/biomet/69.1.242

A. Baranes and P. Oudeyer, Active learning of inverse models with intrinsically motivated goal exploration in robots, Robotics and Autonomous Systems, vol.61, issue.1, pp.49-73, 2013.
DOI : 10.1016/j.robot.2012.05.008

URL : https://hal.archives-ouvertes.fr/hal-00788440

M. Rolf, Goal Babbling for an Efficient Bootstrapping of Inverse Models in High Dimensions, 2012.

M. Lopes, T. Lang, M. Toussaint, and P. Oudeyer, Exploration in model-based reinforcement learning by empirically estimating learning progress, Neural Information Processing Systems (NIPS), 2012.
URL : https://hal.archives-ouvertes.fr/hal-00755248

M. Lopes and P. Oudeyer, The strategic student approach for life-long exploration and learning, 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2012.
DOI : 10.1109/DevLrn.2012.6400807

URL : https://hal.archives-ouvertes.fr/hal-00755216

J. Schmidhuber, Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990-2010), IEEE Transactions on Autonomous Mental Development, vol.2, issue.3, pp.230-247, 2010.

F. Fernández and M. Veloso, Probabilistic policy reuse in a reinforcement learning agent, Proceedings of the fifth international joint conference on Autonomous Agents and Multiagent Systems (AAMAS), 2006.

M. E. Taylor, N. K. Jong, and P. Stone, Transferring instances for model-based reinforcement learning, Machine Learning and Knowledge Discovery in Databases, pp.488-505, 2008.

M. E. Taylor and P. Stone, Transfer learning for reinforcement learning domains: A survey, The Journal of Machine Learning Research, vol.10, pp.1633-1685, 2009.

L. Torrey and J. Shavlik, Transfer Learning, Handbook of Research on Machine Learning Applications, 2009.
DOI : 10.4018/978-1-60566-766-9.ch011

F. Doshi-Velez and G. D. Konidaris, Transfer Learning by Discovering Latent Task Parametrizations, NIPS 2012 Workshop on Bayesian Nonparametric Models for Reliable Planning and Decision-Making Under Uncertainty, 2012.

M. G. Madden and T. Howley, Transfer of Experience Between Reinforcement Learning Environments with Progressive Difficulty, Artificial Intelligence Review, vol.21, issue.3-4, pp.375-398, 2004.
DOI : 10.1023/B:AIRE.0000036264.95672.64

S. Barrett, M. Taylor, and P. Stone, Transfer learning for reinforcement learning on a physical robot, The Ninth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop, 2010.

S. Thrun and T. Mitchell, Lifelong robot learning, Robotics and Autonomous Systems, vol.15, issue.1-2, pp.25-46, 1995.
DOI : 10.1016/0921-8890(95)00004-Y

D. L. Silver, Q. Yang, and L. Li, Lifelong Machine Learning Systems: Beyond Learning Algorithms, AAAI Spring Symposium Series, 2013.

J. Konczak, On the notion of motor primitives in humans and robots, 2005.