A. Andrade, A. Bagri, K. Zaw, B. Roos, and J. Ruiz, Avatar-Mediated Training in the Delivery of Bad News in a Virtual World, Journal of Palliative Medicine, vol.13, issue.12, pp.1415-1424, 2010.
DOI : 10.1089/jpm.2010.0108

J. Allwood, S. Kopp, K. Grammer, E. Ahlsén, E. Oberzaucher et al., The analysis of embodied communicative feedback in multimodal corpora: a prerequisite for behavior simulation, Language Resources and Evaluation, vol.41, issue.3-4, pp.255-272, 2007.

J. Allwood and L. Cerrato, A study of gestural feedback expressions, First nordic symposium on multimodal communication, pp.7-22, 2003.

S. A. Battersby and P. G. Healey, Head and hand movements in the orchestration of dialogue, Annual Conference of the Cognitive Science Society, 2010.

R. Bertrand, G. Ferré, P. Blache, R. Espesser, and S. Rauzy, Backchannels revisited from a multimodal perspective. Auditory-visual Speech Processing, pp.1-5, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00244490

R. Bertrand, P. Blache, R. Espesser, G. Ferré, C. Meunier et al., Le CID – Corpus of Interactional Data : Annotation et exploitation multimodale de parole conversationnelle, pp.49-50, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00349893

E. Bevacqua, M. Mancini, and C. Pelachaud, A Listening Agent Exhibiting Variable Behaviour, International Workshop on Intelligent Virtual Agents, pp.262-269, 2008.
DOI : 10.1007/978-3-540-85483-8_27

E. Bevacqua, S. J. Hyniewska, and C. Pelachaud, Positive influence of smile backchannels in ECAs, International Workshop on Interacting with ECAs as Virtual Characters, p.13, 2010.

B. Bigi, SPPAS: a tool for the phonetic segmentation of speech, The Eighth International Conference on Language Resources and Evaluation, pp.1748-1755, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00983701

P. Boersma, Praat, a system for doing phonetics by computer, 2002.

P. E. Bull, Posture and gesture, 1987.

L. J. Brunner, Smiles can be back channels, Journal of Personality and Social Psychology, vol.37, issue.5, p.728, 1979.
DOI : 10.1037/0022-3514.37.5.728

J. Cassell and K. R. Thorisson, The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents, Applied Artificial Intelligence, vol.13, issue.4-5, pp.519-538, 1999.
DOI : 10.1080/088395199117360

M. Chollet, M. Ochs, and C. Pelachaud, Mining a multimodal corpus for non-verbal signals sequences conveying attitudes, International Conference on Language Resources and Evaluation, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01074879

M. Chollet, M. Ochs, and C. Pelachaud, From nonverbal signals sequence mining to bayesian networks for interpersonal attitudes expression, International Conference on Intelligent Virtual Agents, pp.120-133, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01074880

S. Dermouche and C. Pelachaud, Sequence-based multimodal behavior modeling for social agents, Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI 2016, pp.29-36, 2016.

S. Duncan, Some signals and rules for taking speaking turns in conversations., Journal of Personality and Social Psychology, vol.23, issue.2, p.283, 1972.
DOI : 10.1037/h0033031

P. Ekman and W. V. Friesen, Felt, false, and miserable smiles, Journal of Nonverbal Behavior, vol.6, issue.4, pp.238-252, 1982.
DOI : 10.1007/BF00987191

P. Fournier-Viger, U. Faghihi, R. Nkambou, and E. Mephu Nguifo, CMRules: Mining Sequential Rules Common to Several Sequences, Knowledge-Based Systems, 2012.

P. Fournier-Viger, R. Nkambou, and V. S. Tseng, RuleGrowth: Mining Sequential Rules Common to Several Sequences by Pattern-Growth, Proceedings of the 26th Symposium on Applied Computing (ACM SAC 2011), 2011.

P. Fournier-Viger, T. Gueniche, and V. S. Tseng, ERMiner: Sequential Rule Mining Using Equivalence Classes, Advances in Intelligent Data Analysis, 2014.

R. Gardner, When Listeners Talk, 2001.

G. Ferré, Eyebrows in French talk-in-interaction, Gesture and Speech in Interaction, 2011.

J. Gratch et al., Virtual rapport, International Workshop on Intelligent Virtual Agents, 2006.

A. Gravano and J. M. Hirschberg, Turn-taking cues in task-oriented dialogue, Computer Speech & Language, vol.25, issue.3, pp.601-634, 2011.
DOI : 10.1016/j.csl.2010.10.003

T. Janssoone, C. Clavel, K. Bailly, and G. Richard, Using temporal association rules for the synthesis of embodied conversational agents with a specific stance, International Conference on Intelligent Virtual Agents, pp.175-189, 2016.

K. Jokinen, Non-verbal Feedback in Interactions, Affective Information Processing, 2008.

K. Jokinen, Gaze and gesture activity in communication, Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction Environments, 2009.

S. Kopp, J. Allwood, K. Grammer, E. Ahlsén, T. Stocksmeier et al., Modeling Embodied Feedback with Virtual Humans, Modeling Communication with Robots and Virtual Humans, 2008.

H. Koiso, Y. Horiuchi, S. Tutiya, A. Ichikawa, and Y. Den, An analysis of turn-taking and backchannels based on prosodic and syntactic features in Japanese map task dialogs, Language and Speech, 1998.

R. Levitan and J. Hirschberg, Measuring acoustic-prosodic entrainment with respect to multiple levels and dimensions, Interspeech, 2011.

H. P. Martínez and G. N. Yannakakis, Mining multimodal sequential patterns: a case study on affect detection, Proceedings of the 13th International Conference on Multimodal Interfaces, pp.3-10, ACM, 2011.

D. McNeill, Hand and Mind: What Gestures Reveal about Thought, 1992.

S. Mota and R. W. Picard, Automated posture analysis for detecting learner's interest level, Computer Vision and Pattern Recognition Workshop (CVPR '03), 2003.

L.-P. Morency, I. de Kok, and J. Gratch, A probabilistic multimodal approach for predicting listener backchannels, Autonomous Agents and Multi-Agent Systems, 2010.

P. Paggio and C. Navarretta, Feedback in head gestures and speech, LREC 2010 Workshop Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, pp.1-4, 2010.

E. Philips, The classification of smile patterns, J Can Dent Assoc, vol.65, pp.252-256, 1999.

M. J. Pickering and S. Garrod, Alignment as the Basis for Successful Communication, Research on Language and Computation, pp.203-228, 2006.

R. Poppe, K. P. Truong, D. Reidsma, and D. Heylen, Backchannel Strategies for Artificial Listeners, International Conference on Intelligent Virtual Agents, pp.146-158, 2010.

L. Prévot, B. Bigi, and R. Bertrand, A quantitative view of feedback lexical markers in conversational French, 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp.1-4, 2013.

L. Prévot, J. Gorisch, and S. Mukherjee, Annotation and classification of French feedback communicative functions, The 29th Pacific Asia Conference on Language, Information and Computation, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01227890

S. Rauzy, G. Montcheuil, and P. Blache, MarsaTag, a tagger for French written texts and speech transcriptions, Second Asia Pacific Corpus Linguistics Conference, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01500736

J. Saubesty and M. Tellier, Multimodal analysis of hand gesture back-channel feedback, Gesture and Speech in Interaction 4, pp.205-210, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01498878

E. A. Schegloff, Discourse as an interactional achievement: some uses of 'uh huh' and other things that come between sentences, Analyzing Discourse: Text and Talk, pp.71-93, 1982.

H. Sloetjes and P. Wittenburg, Annotation by Category: ELAN and ISO DCR, LREC, 2008.

R. Srikant and R. Agrawal, Mining sequential patterns: Generalizations and performance improvements, Advances in Database Technology, 1996.

Types de gestes et utilisation de l'espace gestuel dans une description spatiale : méthodologie de l'annotation, Actes du 1er DEfi Geste Langue des Signes (DEGELS), 2011.

K. R. Thórisson, Communicative Humanoids: A Computational Model of Psychosocial Dialogue Skills, PhD thesis, 1996.

N. Ward and W. Tsukahara, Prosodic features which cue back-channel responses in English and Japanese, Journal of Pragmatics, 2000.

V. H. Yngve, On getting a word in edgewise, Chicago Linguistics Society, 6th Meeting, pp.567-578, 1970.