Afzal, S., Sezgin, T. M., Gao, Y., and Robinson, P., Perception of emotional expressions in different representations using facial feature points, 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), pp. 1-6, 2009.

Aubergé, V., and Bailly, G., Generation of intonation: a global approach, 1995.

Bailly, G., and Holm, B., SFC: a trainable prosodic model, Speech Communication, vol. 46, no. 3, pp. 348-364, 2005.

Bailly, G., Barbe, T., and Wang, H., Automatic labeling of large prosodic databases: Tools, methodology and links with a text-to-speech system, The ESCA Workshop on Speech Synthesis, 1991.

Barbosa, P., and Bailly, G., Characterisation of rhythmic patterns for text-to-speech synthesis, Speech Communication, vol. 15, no. 1-2, pp. 127-137, 1994.
DOI : 10.1016/0167-6393(94)90047-7

Barbulescu, A., Hueber, T., Bailly, G., and Ronfard, R., Audio-visual speaker conversion using prosody features, International Conference on Auditory-Visual Speech Processing, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00842928

Ben Youssef, A., Shimodaira, H., and Braude, D. A., Articulatory features for speech-driven head motion synthesis, Proceedings of Interspeech, 2013.

Berndt, D. J., and Clifford, J., Using dynamic time warping to find patterns in time series, KDD Workshop, pp. 359-370, 1994.

Boersma, P., Praat, a system for doing phonetics by computer, Glot International, vol. 5, no. 9/10, pp. 341-345, 2002.

Busso, C., Deng, Z., Grimm, M., Neumann, U., and Narayanan, S., Rigid head motion in expressive speech animation: Analysis and synthesis, IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 1075-1086, 2007.

Cassell, J., Pelachaud, C., Badler, N., Steedman, M., Achorn, B., et al., Animated conversation: Rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents, Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '94), pp. 413-420, 1994.

Chuang, E., and Bregler, C., Mood swings: expressive speech animation, ACM Transactions on Graphics (TOG), vol. 24, no. 2, pp. 331-347, 2005.

de Moraes, J. A., Rilliard, A., de Oliveira Mota, B. A., and Shochi, T., Multimodal perception and production of attitudinal meaning in Brazilian Portuguese, Proc. Speech Prosody, 2010.

Inanoglu, Z., and Young, S., A system for transforming the emotion in speech: combining data-driven conversion techniques for prosody and voice quality, INTERSPEECH, pp. 490-493, 2007.

Krahmer, E., and Swerts, M., Audiovisual prosody: introduction to the special issue, Language and Speech, vol. 52, no. 2-3, 2009.

Madden, M., 99 Ways to Tell a Story: Exercises in Style, 2005.

Morlec, Y., Bailly, G., and Aubergé, V., Generating prosodic attitudes in French: data, model and evaluation, Speech Communication, vol. 33, no. 4, pp. 357-371, 2001.

Moulines, E., and Charpentier, F., Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones, Speech Communication, vol. 9, no. 5-6, pp. 453-467, 1990.

Rilliard, A., Martin, J., Aubergé, V., and Shochi, T., Perception of French audio-visual prosodic attitudes, Speech Prosody, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00262371

Scherer, K. R., and Ellgring, H., Multimodal expression of emotion: Affect programs or componential appraisal patterns?, Emotion, vol. 7, p. 158, 2007.

Tao, J., Kang, Y., and Li, A., Prosody conversion from neutral speech to emotional speech, IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1145-1154, 2006.

Vroomen, J., Collier, R., and Mozziconacci, S. J., Duration and intonation in emotional speech, Eurospeech, 1993.

Yehia, H., Kuratate, T., and Vatikiotis-Bateson, E., Facial animation and head motion driven by speech acoustics, 5th Seminar on Speech Production: Models and Data, pp. 265-268, 2000.

Zeng, Z., Pantic, M., Roisman, G. I., and Huang, T. S., A survey of affect recognition methods: Audio, visual, and spontaneous expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, 2009.