S. M. Anzalone, S. Boucenna, S. Ivaldi, and M. Chetouani, Evaluating the Engagement with Social Robots, International Journal of Social Robotics, vol.7, issue.4, pp.465-478, 2015.

URL : https://hal.archives-ouvertes.fr/hal-01158293

T. Bader, M. Vogelgesang, and E. Klaus, Multimodal integration of natural gaze behavior for intention recognition during object manipulation, Proceedings of the 2009 international conference on Multimodal interfaces, ICMI-MLMI '09, pp.199-206, 2009.
DOI : 10.1145/1647314.1647350

S. Baluja and D. Pomerleau, Non-intrusive gaze tracking using artificial neural networks, Advances in Neural Information Processing Systems (NIPS), pp.753-760, 1994.

S. Boucenna, P. Gaussier, P. Andry, and L. Hafemeister, A Robot Learns the Facial Expressions Recognition and Face/Non-face Discrimination Through an Imitation Game, International Journal of Social Robotics, vol.6, issue.4, pp.633-652, 2014.

I. Bretherton, Intentional communication and the development of an understanding of mind, in: Children's Theories of Mind: Mental States and Social Understanding, pp.49-75, 1991.

G. Castellano, A. Pereira, I. Leite, A. Paiva, and P. W. Mcowan, Detecting user engagement with a robot companion using task and social interaction-based features, Proceedings of the 2009 international conference on Multimodal interfaces, ICMI-MLMI '09, pp.119-126, 2009.
DOI : 10.1145/1647314.1647336

O. Dermy, A. Paraschos, M. Ewerton, J. Peters, F. Charpillet et al., Prediction of Intention during Interaction with iCub with Probabilistic Movement Primitives, Frontiers in Robotics and AI, 2017.

URL : https://hal.archives-ouvertes.fr/hal-01613671

R. Dillmann, R. Becher, and P. Steinhaus, ARMAR II - A Learning and Cooperative Multimodal Humanoid Robot System, International Journal of Humanoid Robotics, vol.01, issue.01, pp.143-155, 2004.
DOI : 10.1142/S0219843604000046

A. Dragan and S. Srinivasa, Generating Legible Motion, Robotics: Science and Systems IX, 2013.
DOI : 10.15607/RSS.2013.IX.024

A. Dragan and S. Srinivasa, Integrating human observer inferences into robot motion planning, Autonomous Robots, vol.37, pp.351-368, 2014.

URL : http://www.ri.cmu.edu/pub_files/2014/7/legibility_AURO14.pdf

G. Ferrer and A. Sanfeliu, Bayesian Human Motion Intentionality Prediction in urban environments, Pattern Recognition Letters, vol.44, pp.134-140, 2014.
DOI : 10.1016/j.patrec.2013.08.013

M. W. Hoffman, D. B. Grimes, A. P. Shon, and R. P. Rao, A probabilistic model of gaze imitation and shared attention, Neural Networks, vol.19, issue.3, pp.299-310, 2006.
DOI : 10.1016/j.neunet.2006.02.008

C. M. Huang and B. Mutlu, Anticipatory robot control for efficient human-robot collaboration, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp.83-90, 2016.
DOI : 10.1109/HRI.2016.7451737

R. Ishii, Y. Shinohara, T. Nakano, and T. Nishida, Combining multiple types of eye-gaze information to predict user's conversational engagement, 2011.

S. Ivaldi, S. Lefort, J. Peters, M. Chetouani, J. Provasi et al., Towards Engagement Models that Consider Individual Factors in HRI: On the Relation of Extroversion and Negative Attitude Towards Robots to Gaze and Speech During a Human-Robot Assembly Task, International Journal of Social Robotics, vol.9, issue.1, pp.63-86, 2017.

URL : https://hal.archives-ouvertes.fr/hal-01322231

J. Kim, C. J. Banks, and J. A. Shah, Collaborative planning with encoding of users' high-level strategies, in: AAAI Conference on Artificial Intelligence (AAAI), 2017.

H. Kozima and H. Yano, A robot that learns to communicate with human caregivers, Proceedings of the First International Workshop on Epigenetic Robotics, pp.47-52, 2001.

C. Ma, H. Prendinger, and M. Ishizuka, Eye movement as an indicator of users' involvement with embodied interfaces at the low level, Proc. AISB, pp.136-143, 2005.

A. N. Meltzoff and R. Brooks, Eyes wide shut: The importance of eyes in infant gaze following and understanding other minds. Gaze following: Its development and significance, 2007.

I. Mitsugami, N. Ukita, and M. Kidode, Robot navigation by eye pointing, Lecture Notes in Computer Science, vol.3711, p.256, 2005.
DOI : 10.1007/11558651_26

A. Paraschos, C. Daniel, J. R. Peters, and G. Neumann, Probabilistic movement primitives, Advances in Neural Information Processing Systems (NIPS), pp.2616-2624, 2013.

H. C. Ravichandar, H. Kumar, and A. Dani, Bayesian human intention inference through multiple model filtering with gaze-based priors, 19th International Conference on Information Fusion (FUSION), pp.2296-2302, 2016.

F. Timm and E. Barth, Accurate eye centre localisation by means of gradients, in: Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), pp.125-130, 2011.

V. J. Traver, A. P. Del Pobil, and M. Pérez-Francisco, Making service robots human-safe, Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), pp.696-701, 2000.
DOI : 10.1109/iros.2000.894685

A. S. Walker-andrews, Infants' perception of expressive behaviors: Differentiation of multimodal information., Psychological Bulletin, vol.121, issue.3, p.437, 1997.
DOI : 10.1037/0033-2909.121.3.437

Z. Wang, M. P. Deisenroth, H. B. Amor, D. Vogt, B. Schölkopf et al., Probabilistic Modeling of Human Movements for Intention Inference, Robotics: Science and Systems VIII, 2012.
DOI : 10.15607/RSS.2012.VIII.055

M. Weser, D. Westhoff, M. Huser, and J. Zhang, Multimodal People Tracking and Trajectory Prediction based on Learned Generalized Motion Patterns, 2006 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp.541-546, 2006.
DOI : 10.1109/MFI.2006.265639

URL : http://tams-www.informatik.uni-hamburg.de/people/alumni/westhoff/publications/mfi2006.pdf

X. Xiong and F. De la Torre, Supervised Descent Method and Its Applications to Face Alignment, 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
DOI : 10.1109/CVPR.2013.75

URL : http://www.ri.cmu.edu/pub_files/2013/5/main.pdf