P. Arias, P. Belin, and J. Aucouturier, Auditory smiles trigger unconscious facial imitations, Current Biology, vol.28, issue.14, pp.782-783, 2018.

P. Arias, C. Soladie, O. Bouafif, A. Robel, R. Seguier et al., Realistic transformation of facial and vocal smiles in real-time audiovisual streams, IEEE Transactions on Affective Computing, 2018.

E. Ponsot, P. Arias, and J. J. Aucouturier, Uncovering mental representations of smiled speech using reverse correlation, The Journal of the Acoustical Society of America, vol.143, issue.1, pp.19-24, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01712385

L. Rachman, M. Liuni, P. Arias, A. Lind, P. Johansson et al., DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech, Behavior research methods, vol.50, issue.1, pp.323-343, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01450511

Abstracts and presentations at international conferences

P. Arias, P. Belin, and J. Aucouturier, Hearing smiles and smiling back, 2018.

P. Arias, Auditory smiles trigger unconscious facial reactions. Contextual: How the Social Context Shapes Brain and Behaviour, International conference of the European Society for Cognitive and Affective Neurosciences (ESCAN), 2018.

P. Arias, P. Belin, and J. Aucouturier, Unconsciously imitating smiles heard in speech -and hearing smiles in musical sounds, 2018.

Appendix B. Publications by the author

P. Arias, Représentations mentales du sourire dans la voix parlée : une étude par corrélation inverse, Congrès Français d'Acoustique (CFA), 2018.

P. Arias, Ziggy: the rise and fall of zygomatic muscles in speech, Conference of the Consortium of European Research on Emotions, 2018.

P. Arias, Unconscious physiological reactions to smiles in speech revealed by SuperVP's frequency warping, Journée RIM, IRCAM, 2017.

P. Arias, Spectral cues caused by smiling trigger unconscious facial imitation. Music Language and Cognition Summer School, 2017.

P. Arias, Emotional mimicry induced by manipulated speech. Workshop on Music cognition, emotion and audio technology in Tokyo, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01450552

P. Arias, Journées Jeunes Chercheurs en Audition, Acoustique musicale et Signal audio (JJCAAS), 2016.

P. Arias, Time perception and neural oscillations modulated by speech rate, Festival IRCAM-Manifeste, 2016.

L. Rachman, M. Liuni, P. Arias, and J. J. Aucouturier, Synthesizing speechlike emotional expression onto music and speech signals, Fifth International Conference on Music and Emotions. ICME4, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01261133

P. Arias, J.-J. Aucouturier, and A. Roebel, Méthode et appareil de modification dynamique du timbre de la voix par décalage en fréquence des formants d'une enveloppe spectrale, 2018.

Teaching and Public dissemination

P. Arias, Perception of Smiles in the Voice, Voice Tech Podcast, 2018.

P. Arias, Essential aspects of voice perception, IRCAM, Cursus-Program on Composition and Computer, 2018.

P. Arias, Trois outils de traitement de la voix émotionnelle et leurs effets physiologiques, École nationale d'ingénieurs de Tunis (ENIT), Journée d'études : TICs, Musique et émotion. Tunisie, 2018.

P. Arias and M. Liuni, Transformations émotionnelles de la voix parlée -conséquences comportementales et physiologiques. Journée voix -Studio 5 en direct, IRCAM, 2017.


Master's Thesis

P. Arias, J. Françoise, F. Bevilacqua, and N. Schnell, Master's thesis.

Bibliography

R. Adolphs, D. Tranel, H. Damasio, and A. Damasio, Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala, Nature, vol.372, p.669, 1994.

R. Adolphs, F. Gosselin, T. W. Buchanan, D. Tranel, P. Schyns et al., A mechanism for impaired fear recognition after amygdala damage, Nature, vol.433, issue.7021, p.68, 2005.

A. Ahumada Jr. and J. Lovell, Stimulus features in signal detection, The Journal of the Acoustical Society of America, vol.49, pp.1751-1756, 1971.

H. Akaike, A new look at the statistical model identification, IEEE transactions on automatic control, vol.19, pp.716-723, 1974.

M. E. Ansfield, Smiling when distressed: When a smile is a frown turned upside down, Personality and Social Psychology Bulletin, vol.33, pp.763-775, 2007.

P. Arias, P. Belin, and J. Aucouturier, Auditory smiles trigger unconscious facial imitations, Current Biology, 2018.

P. Arias, J.-J. Aucouturier, and A. Roebel, Méthode et appareil de modification dynamique du timbre de la voix par décalage en fréquence des formants d'une enveloppe spectrale, 2017.

P. Arias, C. Soladie, O. Bouafif, A. Robel, R. Seguier et al., Realistic transformation of facial and vocal smiles in real-time audiovisual streams, IEEE Transactions on Affective Computing, 2018.

L. H. Arnal, A. Flinker, A. Kleinschmidt, A.-L. Giraud et al., Human screams occupy a privileged niche in the communication soundscape, Current Biology, vol.25, issue.15, pp.2051-2056, 2015.

S. R. Arnott, A. Singhal, and M. A. Goodale, An investigation of auditory contagious yawning, Cognitive, Affective, & Behavioral Neuroscience, vol.9, issue.3, pp.335-342, 2009.

J. Aucouturier, P. Johansson, L. Hall, R. Segnini, L. Mercadié et al., Covert digital manipulation of vocal emotion alter speakers' emotional states in a congruent direction, Proceedings of the National Academy of Sciences 113.4, pp.948-953, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01261138

M. Baart and J. Vroomen, Recalibration of vocal affect by a dynamic face, Experimental brain research, pp.1-8, 2018.

J. Bachorowski and M. J. Owren, Vocal expression of emotion: Acoustic properties of speech are associated with emotional intensity and context, Psychological Science, vol.6, issue.4, pp.219-224, 1995.

R. Banse and K. R. Scherer, Acoustic profiles in vocal emotion expression, In: Journal of personality and social psychology, vol.70, p.614, 1996.

J. A. Bargh, Conditional automaticity: Varieties of automatic influence in social perception and cognition, Unintended thought, vol.3, pp.51-69, 1989.

L. Feldman Barrett, The theory of constructed emotion: an active inference account of interoception and categorization, Social Cognitive and Affective Neuroscience, vol.12, pp.1-23, 2017.

L. W. Barsalou, P. M. Niedenthal, A. K. Barbey, and J. A. Ruppert, Social embodiment, Psychology of learning and motivation 43, pp.43-92, 2003.

H. Barthel and H. Quené, Acoustic-phonetic properties of smiling revised-measurements on a natural video corpus, Proceedings of the 18th International Congress of Phonetic Sciences, 2015.

F. Basso and O. Oullier, "Smile down the phone": Extending the effects of smiles to vocal social interactions, Behavioral and Brain Sciences, pp.435-436, 2010.
URL : https://hal.archives-ouvertes.fr/halshs-00601466

J. Baudouin, D. Gilibert, S. Sansone, and G. Tiberghien, When the smile is a cue to familiarity, pp.285-292, 2000.
URL : https://hal.archives-ouvertes.fr/hal-00654040

M. S. Beauchamp, A. R. Nath, and S. Pasalar, fMRI-guided transcranial magnetic stimulation reveals that the superior temporal sulcus is a cortical locus of the McGurk effect, Journal of Neuroscience, vol.30, pp.2414-2417, 2010.

D. Bedoya, L. Goupil, and J. Aucouturier, Les émotions sont-elles exprimées de la même façon en musique que dans la voix parlée, 2018.

R. Behroozmand, O. Korzyukov, L. Sattler, and C. Larson, Opposing and following vocal responses to pitch-shifted auditory feedback: evidence for different mechanisms of voice pitch control, The Journal of the Acoustical Society of America, vol.132, pp.2468-2477, 2012.

P. Belin, R. J. Zatorre, P. Lafaille, P. Ahad, and B. Pike, Voice-selective areas in human auditory cortex, Nature, vol.403, issue.6767, p.309, 2000.

B. G. Berg, Analysis of weights in multiple observation tasks, The Journal of the Acoustical Society of America, vol.86, pp.1743-1746, 1989.

F. J. Bernieri, S. Reznick, and R. Rosenthal, Synchrony, pseudosynchrony, and dissynchrony: Measuring the entrainment process in mother-infant interactions, In: Journal of personality and social psychology, vol.54, p.243, 1988.

P. E. Bestelmeyer, P. Belin, and M. Grosbras, Right temporal TMS impairs voice detection, Current Biology, vol.21, issue.20, pp.838-839, 2011.

S. Blairy, P. Herrera, and U. Hess, Mimicry and the judgment of emotional facial expressions, Journal of Nonverbal behavior, vol.23, pp.5-41, 1999.

A. Blasi, E. Mercure, S. Lloyd-fox, A. Thomson, M. Brammer et al., Early specialization for voice and emotion processing in the infant brain, Current Biology, vol.21, pp.1220-1224, 2011.

E. Bliss-moreau and G. Moadab, The faces monkeys make". In: The science of facial expression, 2017.

A. J. Blood, J. Robert, and . Zatorre, Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion, Proceedings of the National Academy of Sciences 98.20, pp.11818-11823, 2001.

T. D. Blumenthal and C. T. Goode, The startle eyeblink response to low intensity acoustic stimuli, Psychophysiology, vol.28, issue.3, pp.296-306, 1991.

P. Boersma and D. Weenink, Praat: doing phonetics by computer, 2017.

P. Boersma, Praat, a system for doing phonetics by computer, Glot International, vol.5, 2002.

P. Bourgeois and U. Hess, The impact of social context on mimicry, Biological psychology 77.3, pp.343-352, 2008.

D. L. Bowling, J. C. Garcia, R. Dunn, . Ruprecht, K. Stewart et al., Body size and vocalization in primates and carnivores, Scientific reports, vol.7, p.41070, 2017.

J. J. A. van Boxtel and H. Lu, Joints and their relations as critical features in action discrimination: Evidence from a classification image method, Journal of Vision, vol.15, pp.20-20, 2015.

M. M. Bradley, L. Miccoli, M. A. Escrig, and P. Lang, The pupil as a measure of emotional arousal and autonomic activation, Psychophysiology 45, vol.4, pp.602-607, 2008.

H. C. Breiter, N. L. Etcoff, P. J. Whalen, W. A. Kennedy et al., Response and habituation of the human amygdala during visual processing of facial expression, Neuron, vol.17, pp.875-887, 1996.

E. F. Briefer, Vocal expression of emotions in mammals: mechanisms of production and evidence, Journal of Zoology, vol.288, pp.1-20, 2012.

M. Bruder, D. Dosmukhambetova, J. Nerb, and A. Manstead, Emotional signals in nonverbal interaction: Dyadic facilitation and convergence in expressions, appraisals, and feelings, Cognition & emotion, vol.26, pp.480-502, 2012.

G. A. Bryant, D. M. T. Fessler, R. Fusaroli, E. Clint et al., Detecting affiliation in colaughter across 24 societies, Proceedings of the National Academy of Sciences, vol.113, issue.17, pp.4682-4687, 2016.

G. A. Bryant, D. M. T. Fessler, R. Fusaroli, E. Clint et al., The perception of spontaneous and volitional laughter across 21 societies, Psychological Science, vol.29, pp.1515-1525, 2018.

B. R. Buchsbaum, G. Hickok, and C. Humphries, Role of left posterior superior temporal gyrus in phonological processing for speech perception and production, Cognitive Science, vol.25, pp.663-678, 2001.

J. J. Burred, E. Ponsot, L. Goupil, M. Liuni, and J. Aucouturier, CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition, bioRxiv, p.436477, 2018.
URL : https://hal.archives-ouvertes.fr/hal-02122143

L. Camras, S. Fatani, B. Fraumeni, and M. Shuster, The development of facial expressions: current perspectives on infant emotions, in L. F. Barrett, M. Lewis et al. (eds.), Handbook of Emotions, Fourth Edition, 2016.

M. Cannizzaro, B. Harel, N. Reilly, P. Chappell, and P. Snyder, Voice acoustical measurement of the severity of major depression, Brain and cognition, vol.56, pp.30-35, 2004.

P. Cannon, A. Hayes, and S. Tipper, An electromyographic investigation of the impact of task relevance on facial mimicry, Cognition & Emotion, vol.23, issue.5, pp.918-929, 2009.

T. L. Chartrand and J. A. Bargh, The chameleon effect: the perception-behavior link and social interaction, Journal of personality and social psychology, vol.76, p.893, 1999.

T. L. Chartrand and A. N. Dalton, Mimicry: Its ubiquity, importance, and functionality, Oxford handbook of human action, pp.458-483, 2009.

C. M. Cheng and T. L. Chartrand, Self-monitoring without awareness: using mimicry as a nonconscious affiliation strategy, Journal of personality and social psychology, vol.85, p.1170, 2003.

S. Chevalier-Skolnikoff, The primate play face: A possible key to the determinants and evolution of play, vol.60, issue.3, 1974.

C. S. Chong, J. Kim, and C. Davis, Disgust expressive speech: The acoustic consequences of the facial expression of emotion, Speech Communication, vol.98, pp.68-72, 2018.

O. Collignon, S. Girard, F. Gosselin, S. Roy, D. Saint-amour et al., Audio-visual integration of emotion expression, Brain research 1242, pp.126-135, 2008.

F. S. Cooper, P. C. Delattre, A. M. Liberman, J. M. Borst, and L. J. Gerstman, Some experiments on the perception of synthetic speech sounds, The Journal of the Acoustical Society of America, vol.24, pp.597-606, 1952.

L. Cosmides and J. Tooby, Evolutionary psychology and the emotions, pp.91-115, 2000.

S. Cromheeke and S. C. Mueller, The power of a smile: stronger working memory effects for happy faces in adolescents compared to adults, Cognition and Emotion, vol.30, pp.288-301, 2016.

A. R. Damasio, T. J. Grabowski, A. Bechara, H. Damasio et al., Subcortical and cortical brain activity during the feeling of self-generated emotions, Nature Neuroscience, vol.3, p.1049, 2000.

C. Darwin, The expression of the emotions in man and animals, The American Journal of the Medical Sciences, vol.232, issue.4, p.477, 1872.

A. D'ausilio, F. Pulvermüller, P. Salmas, I. Bufalari, C. Begliomini et al., The motor somatotopy of speech perception, Current Biology, vol.19, pp.381-385, 2009.

B. de Boer and P. K. Kuhl, Investigating the role of infant-directed speech with a computer model, Acoustics Research Letters Online, pp.129-134, 2003.

B. de Gelder and J. Vroomen, The perception of emotions by ear and by eye, Cognition & Emotion, vol.14, pp.289-311, 2000.

F. de Vignemont and T. Singer, The empathic brain: how, when and why?, Trends in Cognitive Sciences, vol.10, issue.10, pp.435-441, 2006.
URL : https://hal.archives-ouvertes.fr/ijn_00169584

G. di Pellegrino, L. Fadiga, L. Fogassi, V. Gallese, and G. Rizzolatti, Understanding motor events: a neurophysiological study, Experimental Brain Research, vol.91, pp.176-180, 1992.

U. Dimberg and M. Thunberg, Rapid facial reactions to emotional facial expressions, Scandinavian journal of psychology, vol.39, pp.39-45, 1998.

U. Dimberg, M. Thunberg, and K. Elmehed, Unconscious facial reactions to emotional facial expressions, Psychological science 11, pp.86-89, 2000.

U. Dimberg, M. Thunberg, and S. Grunedal, Facial reactions to emotional stimuli: Automatically controlled emotional responses, Cognition & Emotion, vol.16, pp.449-471, 2002.

R. J. Dolan, J. S. Morris, and B. de Gelder, Crossmodal binding of fear in voice and face, Proceedings of the National Academy of Sciences, vol.98, issue.17, pp.10006-10010, 2001.

A. Drahota, A. Costall, and V. Reddy, The vocal communication of different kinds of smile, Speech Communication, vol.50, pp.278-287, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00499197

J. Driver, Enhancement of selective listening by illusory mislocation of speech sounds due to lip-reading, Nature 381, vol.6577, p.66, 1996.

Dynamixyz, Generic Face Tracking, 2017. URL: www.dynamixyz.com

I. Eibl-Eibesfeldt, The expressive behaviour of the deaf-and-blind-born, pp.163-194, 1973.

P. Ekman, Facial action coding system (FACS), 2002.

P. Ekman and W. V. Friesen, Manual for the facial action coding system, 1978.

P. Ekman and W. V. Friesen, Felt, false, and miserable smiles, Journal of Nonverbal Behavior, vol.6, pp.238-252, 1982.

P. Ekman and E. L. Rosenberg, What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS), 1997.

P. Ekman, R. Sorenson, and W. V. Friesen, Pancultural elements in facial displays of emotion, Science, vol.164, issue.3875, pp.86-88, 1969.

K. El Haddad, S. Dupont, N. d'Alessandro, and T. Dutoit, An HMM-based speech-smile synthesis system: An approach for amusement synthesis, 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, vol.5, pp.1-6, 2015.

K. El Haddad, H. Cakmak, S. Dupont, and T. Dutoit, Towards a speech synthesis system with controllable amusement levels, 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, pp.14-15, 2015.

K. El Haddad et al., Laughter and Smile Processing for Human-Computer Interactions, Just talking-casual talk among humans and machines, pp.23-28, 2016.

K. El Haddad, I. Torre, E. Gilmartin, H. Çakmak, S. Dupont et al., Introducing AmuS: The Amused Speech Database, International Conference on Statistical Language and Speech Processing, pp.229-240, 2017.

P. C. Ellsworth, William James and emotion: is a century of fame worth a century of misunderstanding?, In: Psychological Review, vol.101, issue.2, p.222, 1994.

D. Erro, E. Navas, and I. Hernaez, Parametric voice conversion based on bilinear frequency warping plus amplitude scaling, IEEE Transactions on Audio, Speech, and Language Processing, vol.21, pp.556-566, 2013.

T. Ethofer, D. Van De Ville, K. Scherer, and P. Vuilleumier, Decoding of emotional information in voice-sensitive cortices, Current Biology, vol.19, issue.12, pp.1028-1033, 2009.

S. Evans, N. Neave, and D. Wakelin, Relationships between vocal characteristics and body size and shape in human males: an evolutionary explanation for a deep male voice, Biological Psychology, vol.72, issue.2, pp.160-163, 2006.

S. Fagel, Effects of smiling on articulation: Lips, larynx and acoustics, Development of multimodal interfaces: active listening and synchrony, pp.294-303, 2010.

S. Fecteau, P. Belin, Y. Joanette, and J. L. Armony, Amygdala responses to nonlinguistic emotional vocalizations, Neuroimage 36, pp.480-487, 2007.

C. M. Fiacconi and A. M. Owen, Using facial electromyography to detect preserved emotional processing in disorders of consciousness: A proof-of-principle study, Clinical Neurophysiology, vol.127, pp.3000-3006, 2016.

A. H. Fischer and A. Sr-manstead, Social functions of emotion, Handbook of emotions 3, pp.456-468, 2008.

W. T. Fitch, Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques, The Journal of the Acoustical Society of America, vol.102, pp.1213-1222, 1997.

W. T. Fitch, The evolution of speech: a comparative review, Trends in Cognitive Sciences, vol.4, issue.7, pp.258-267, 2000.

W. T. Fitch, J. Neubauer, and H. Herzel, Calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production, Animal Behaviour, vol.63, issue.3, pp.407-418, 2002.

J. Föcker, M. Gondan, and B. Röder, Preattentive processing of audio-visual emotional signals, Acta psychologica 137.1, pp.36-47, 2011.

S. Frühholz, W. Trost, and S. A. Kotz, The sound of emotions-Towards a unifying neural network perspective of affective sound processing, Neuroscience & Biobehavioral Reviews, vol.68, pp.96-110, 2016.

B. Galantucci, C. A. Fowler, and M. T. Turvey, The motor theory of speech perception reviewed, Psychonomic Bulletin & Review, vol.13, issue.3, pp.361-377, 2006.

V. Gallese, L. Fadiga, L. Fogassi, and G. Rizzolatti, Action recognition in the premotor cortex, Brain, vol.119, issue.2, pp.593-609, 1996.

V. Gazzola and C. Keysers, The observation and execution of actions share motor and somatosensory voxels in all tested subjects: single-subject analyses of unsmoothed fMRI data, Cerebral Cortex, vol.19, issue.6, pp.1239-1255, 2008.

A. Gelman and J. Hill, Data analysis using regression and multilevel/hierarchical models, vol.1, 2007.

A. Gerdes, M. J. Wieser, and G. W. Alpers, Emotional pictures and sounds: a review of multimodal interactions of emotion cues in multiple domains, Frontiers in Psychology, vol.5, p.1351, 2014.

H. Giles, Accent mobility: A model and some data, Anthropological linguistics, pp.87-105, 1973.

O. Glanz, J. Derix, R. Kaur, A. Schulze-bonhage, P. Auer et al., Real-life speech production and perception have a shared premotor-cortical substrate, Scientific reports 8.1, p.8898, 2018.

J. M. Gold, A. B. Sekuler, and P. Bennett, Characterizing perceptual learning with external noise, Cognitive Science, vol.28, pp.167-207, 2004.

J. Golle, F. W. Mast, and J. Lobmaier, Something to smile about: The interrelationship between attractiveness and emotional expression, Cognition & Emotion, vol.28, issue.2, pp.298-310, 2014.

M. A. Goodale and D. Milner, Separate visual pathways for perception and action, Trends in neurosciences 15.1, pp.20-25, 1992.

F. Gosselin and P. G. Schyns, Superstitious perceptions reveal properties of internal representations, Psychological Science, vol.14, pp.505-509, 2003.

N. Gosselin, I. Peretz, E. Johnsen, and R. Adolphs, Amygdala damage impairs emotion recognition from music, Neuropsychologia, vol.45, issue.2, pp.236-244, 2007.

F. Gougoux, F. Lepore, M. Lassonde, P. Voss, J. Robert et al., Pitch discrimination in the early blind: People blinded in infancy have sharper listening skills than those who lost their sight later, 2004.

A. Gramfort, M. Luessi, E. Larson, D. A. Engemann et al., MNE software for processing MEG and EEG data, Neuroimage, vol.86, pp.446-460, 2014.
URL : https://hal.archives-ouvertes.fr/hal-02369299

K. W. Grant and P. Seitz, The use of visible speech cues for improving auditory detection of spoken sentences, The Journal of the Acoustical Society of America, vol.108, pp.1197-1208, 2000.

J. J. Gross, Antecedent-and response-focused emotion regulation: divergent consequences for experience, expression, and physiology, In: Journal of personality and social psychology, vol.74, p.224, 1998.

S. D. Gunnery and M. A. Ruben, Perceptions of Duchenne and non-Duchenne smiles: A meta-analysis, Cognition and Emotion, vol.30, pp.501-515, 2016.

B. A. S. Hasan, M. Valdes-Sosa, J. Gross, and P. Belin, "Hearing faces and seeing voices": Amodal coding of person identity in the human brain, Scientific Reports, vol.6, p.37494, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01469009

E. Hatfield, J. T. Cacioppo, and R. L. Rapson, Emotional contagion, Current directions in psychological science 2.3, pp.96-100, 1993.

S. T. Hawk, A. H. Fischer, and G. Kleef, Face the noise: Embodied responses to nonverbal vocalizations of discrete emotions, In: Journal of Personality and Social Psychology, vol.102, pp.16-29, 2012.

U. Hess, M. G. Beaupré, and N. Cheung, Who to whom and why-cultural differences and similarities in the function of smiles, An empirical reflection on the smile, vol.4, p.187, 2002.

U. Hess and A. Fischer, Emotional mimicry as social regulation, Personality and Social Psychology Review, vol.17, pp.142-157, 2013.

U. Hess and A. H. Fischer, Emotional mimicry in social context, 2016.

U. Hess, P. Philippot, and S. Blairy, Facial reactions to emotional facial expressions: Affect or cognition?, In: Cognition & Emotion, vol.12, pp.509-531, 1998.

C. M. Heyes and C. D. Frith, The cultural evolution of mind reading, Science, vol.344, issue.6190, p.1243091, 2014.

G. Hickok, The myth of mirror neurons: The real neuroscience of communication and cognition, 2014.

G. Hickok, J. Houde, and F. Rong, Sensorimotor integration in speech processing: computational basis and neural organization, Neuron, vol.69, pp.407-422, 2011.

G. Hickok and D. Poeppel, The cortical organization of speech processing, Nature Reviews Neuroscience, vol.8, p.393, 2007.

G. Hickok, M. Costanzo, R. Capasso, and G. Miceli, The role of Broca's area in speech perception: Evidence from aphasia revisited, Brain and language, vol.119, pp.214-220, 2011.

J. K. Hietanen, V. Surakka, and I. Linnankoski, Facial electromyographic responses to vocal affect expressions, Psychophysiology 35.05, pp.530-536, 1998.

S. Hoehl, K. Hellmer, M. Johansson, and G. Gredebäck, Itsy bitsy spider. . . : infants react with increased arousal to spiders and snakes, Frontiers in psychology, vol.8, p.1710, 2017.

B. Hommel, Consciousness and control: not identical twins, Journal of Consciousness Studies, vol.14, issue.2, pp.155-176, 2007.

D. H. Hubel, The way biomedical research is organized has dramatically changed over the past half-century: Are the changes for the better?, pp.161-163, 2009.

H. Y. Im and J. Halberda, The effects of sampling and internal noise on the representation of ensemble average size, Attention, Perception, & Psychophysics, vol.75, issue.2, pp.278-286, 2013.

C. E. Izard, Innate and universal facial expressions: evidence from developmental and cross-cultural research, Perspectives on psychological science 2.3, pp.1-25, 1994.

M. Jabbi, M. Swart, and C. Keysers, Empathy for positive and negative emotions in the gustatory cortex, pp.1744-1753, 2007.

R. E. Jack, O. G. B. Garrod, H. Yu, R. Caldara, and P. G. Schyns, Facial expressions of emotion are not culturally universal, Proceedings of the National Academy of Sciences, vol.109, issue.19, pp.7241-7244, 2012.

R. E. Jack, W. Sun, I. Delis, O. G. B. Garrod, and P. G. Schyns, Four not six: Revealing culturally common facial expressions of emotion, Journal of Experimental Psychology: General, vol.145, p.708, 2016.

W. James, What is an emotion?, Mind, vol.9, issue.34, pp.188-205, 1884.

S. Jessen, N. Altvater-mackensen, and T. Grossmann, Pupillary responses reveal infants' discrimination of facial emotions independent of conscious perception, Cognition, vol.150, pp.163-169, 2016.

P. N. Juslin and P. Laukka, Communication of emotions in vocal expression and music performance: Different channels, same code?, In: Psychological bulletin, vol.129, p.770, 2003.

L. D. Kaczmarek, M. Behnke, T. B. Kashdan, A. Kusiak, M. Marzec et al., Smile intensity in social networking profile photographs is related to greater scientific achievements, The Journal of Positive Psychology, vol.13, pp.435-439, 2018.

N. Kaganovich, J. Schumaker, and C. Rowland, Matching heard and seen speech: an ERP study of audiovisual word recognition, Brain and language, vol.157, pp.14-24, 2016.

K. Kawakami, K. Takai-kawakami, M. Tomonaga, J. Suzuki, T. Kusaka et al., Origins of smile and laughter: A preliminary study, Early Human Development, vol.82, pp.61-66, 2006.

M. Keough, A. Ozburn, E. K. Mcclay, M. D. Schwan, M. Schellenberg et al., Acoustic and articulatory qualities of smiled speech, Canadian Acoustics, vol.43, issue.3, 2015.

C. Keysers and V. Gazzola, Social neuroscience: mirror neurons recorded in humans, Current Biology, vol.20, issue.8, pp.353-354, 2010.

C. Keysers and V. Gazzola, Neural Correlates of Empathy in Humans, and the Need for Animal Models, Neuronal Correlates of Empathy, pp.37-52, 2018.

C. Keysers, E. Kohler, A. Umiltà, L. Nanetti, L. Fogassi et al., Audiovisual mirror neurons and action recognition, pp.628-636, 2003.

Y. Kim, J. Thayne, and Q. Wei, An embodied agent helps anxious students in mathematics learning, Educational Technology Research and Development, pp.1-17, 2016.

S. Koelsch, T. Fritz, Y. V. Cramon, K. Müller, and A. D. Friederici, Investigating emotion with music: an fMRI study, Human brain mapping, vol.27, pp.239-250, 2006.

E. Kohler, C. Keysers, A. Umilta, L. Fogassi, V. Gallese et al., Hearing sounds, understanding actions: action representation in mirror neurons, Science, vol.297, issue.5582, pp.846-848, 2002.

L. L. Kontsevich and C. W. Tyler, What makes Mona Lisa smile?, Vision Research, vol.44, issue.13, pp.1493-1498, 2004.

N. Krämer, S. Kopp, C. Becker-asano, and N. Sommer, Smile and the world will smile with you-The effects of a virtual agent's smile on users' evaluation and behavior, International Journal of Human-Computer Studies, vol.71, pp.335-349, 2013.

B. Kreifelts, T. Ethofer, W. Grodd, M. Erb, and D. Wildgruber, Audiovisual integration of emotional signals in voice and face: an event-related fMRI study, Neuroimage, vol.37, pp.1445-1456, 2007.

E. Krumhuber, A. S. R. Manstead, D. Cosker, D. Marshall, P. L. Rosin et al., Facial dynamics as indicators of trustworthiness and cooperative behavior, Emotion, vol.7, p.730, 2007.

J. Ku, H. J. Jang, K. Kim, J. H. Kim, S. H. Park et al., Experimental results of affective valence and arousal to avatar's facial expressions, CyberPsychology & Behavior, vol.8, issue.5, pp.493-503, 2005.

P. K. Kuhl and A. N. Meltzoff, The bimodal perception of speech in infancy, Science, vol.218, issue.4577, pp.1138-1141, 1982.

P. K. Kuhl and A. N. Meltzoff, Infant vocalizations in response to speech: Vocal imitation and developmental change, The Journal of the Acoustical Society of America, vol.100, pp.2425-2438, 1996.

J. Künecke, A. Hildebrandt, G. Recio, W. Sommer, and O. Wilhelm, Facial EMG responses to emotional expressions are related to emotion perception ability, PloS one 9, p.84053, 2014.

M. Kunz, K. Prkachin, and S. Lautenbacher, The smile of pain, pp.273-275, 2009.

E. Lasarcyk and J. Trouvain, Spread lips + raised larynx + higher f0 = smiled speech? An articulatory synthesis approach, Proceedings of ISSP, pp.43-48, 2008.

R. Laurent, M. Barnaud, J. Schwartz, P. Bessière, and J. Diard, The complementary roles of auditory and motor information evaluated in a Bayesian perceptuo-motor model of speech perception, Psychological review, vol.124, p.572, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01484383

J. E. Ledoux and R. Brown, A higher-order theory of emotional consciousness, Proceedings of the National Academy of Sciences, p.201619316, 2017.

D. H. Lee and A. K. Anderson, Form and function in facial expressive behavior, Handbook of emotions, pp.495-509, 2016.

L. Lee and R. Rose, Speaker normalization using efficient frequency warping procedures, Acoustics, Speech, and Signal Processing, vol.1, pp.353-356, 1996.

C. Lehmann, T. Mueller, A. Federspiel, D. Hubl, G. Schroth et al., Dissociation between overt and unconscious face processing in fusiform face area, pp.75-83, 2004.

G. Lemaitre, J. A. Pyles, A. R. Halpern, N. Navolio, M. Lehet et al., Who's that knocking at my door? Neural bases of sound source identification, Cerebral cortex bhw397, pp.1-14, 2017.

D. A. Leopold and G. Rhodes, A comparative view of face perception, In: Journal of Comparative Psychology, vol.124, p.233, 2010.

N. Lessard, M. Paré, F. Lepore, and M. Lassonde, Early-blind human subjects localize sound sources better than sighted subjects, p.278, 1998.

K. Levecque, F. Anseel, A. D. Beuckelaer, J. Van-der-heyden, and L. Gisle, Work organization and mental health problems in PhD students, Research Policy, vol.46, pp.868-879, 2017.

W. J. Levelt and S. Kelter, Surface form and memory in question answering, Cognitive Psychology, vol.14, pp.78-106, 1982.

A. M. Liberman and I. G. Mattingly, The motor theory of speech perception revised, Cognition, vol.21, issue.1, pp.1-36, 1985.

A. M. Liberman, F. S. Cooper, D. P. Shankweiler, and M. Studdert-Kennedy, Perception of the speech code, Psychological Review, vol.74, issue.6, p.431, 1967.

K. U. Likowski, A. Mühlberger, A. B. M. Gerdes, M. J. Wieser, P. Pauli et al., Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging, Frontiers in Human Neuroscience, 2012.

C. F. Lima, S. Krishnan, and S. K. Scott, Roles of supplementary motor areas in auditory processing and auditory imagery, Trends in Neurosciences, vol.39, issue.8, pp.527-542, 2016.

T. Lipps, Empathy, inner imitation, and sense-feelings, A modern book of aesthetics, 1903.

H. Liu, P. K. Kuhl, and F. Tsao, An association between mothers' speech clarity and infants' speech discrimination skills, 2003.

M. Liuni and A. Roebel, Phase vocoder and beyond, Musica/Tecnologia, vol.7, pp.73-120, 2013.
URL : https://hal.archives-ouvertes.fr/hal-01250848

P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar et al., The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression, Proceedings of IEEE workshop on CVPR for Human Communicative Behavior Analysis, 2010.

N. A. Macmillan and C. D. Creelman, Detection theory: A user's guide, 2004.

C. Magariños, P. Lopez-Otero, L. Docio-Fernandez, E. R. Banga, C. Garcia-Mateo et al., Piecewise linear definition of transformation functions for speaker de-identification, First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE), IEEE, pp.1-5, 2016.

M. J. Magnée, J. J. Stekelenburg, C. Kemner, and B. de Gelder, Similar facial electromyographic responses to faces, voices, and body expressions, Neuroreport, vol.18, issue.4, pp.369-372, 2007.

H. Maldonado, J. Lee, S. Brave, C. Nass, H. Nakajima et al., We learn better together: enhancing elearning with emotional characters, Proceedings of the 2005 conference on Computer support for collaborative learning: learning 2005: the next 10 years! International Society of the Learning Sciences, pp.408-417, 2005.

M. C. Mangini and I. Biederman, Making the ineffable explicit: Estimating the information employed for face classifications, Cognitive Science, vol.28, pp.209-226, 2004.

E. Maris and R. Oostenveld, Nonparametric statistical testing of EEG-and MEG-data, Journal of neuroscience methods, vol.164, pp.177-190, 2007.

J. Martin, M. Rychlowska, A. Wood, and P. Niedenthal, Smiles as Multipurpose Social Signals, Trends in cognitive sciences, 2017.

D. Matsumoto and P. Ekman, American-Japanese cultural differences in intensity ratings of facial expressions of emotion, Motivation and Emotion, vol.13, pp.143-157, 1989.

D. Matsumoto and T. Kudoh, American-Japanese cultural differences in attributions of personality based on smiles, Journal of Nonverbal Behavior, vol.17, pp.231-243, 1993.

D. Matsumoto and B. Willingham, Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals, Journal of personality and social psychology, vol.96, p.1, 2009.

H. McGurk and J. MacDonald, Hearing lips and seeing voices, Nature, vol.264, issue.5588, p.746, 1976.

H. van der Meij, J. van der Meij, and R. Harmsen, Animated pedagogical agents effects on enhancing student motivation and learning in a science inquiry learning environment, Educational Technology Research and Development, vol.63, issue.3, pp.381-403, 2015.

A. N. Meltzoff, Immediate and deferred imitation in fourteen- and twenty-four-month-old infants, Social Learning: Psychological and Biological Perspectives, p.319, 1985.

A. N. Meltzoff and M. Moore, Newborn infants imitate adult facial gestures, Child development, pp.702-709, 1983.

A. N. Meltzoff and M. Moore, Explaining facial imitation: A theoretical model, Early Development and Parenting, vol.6, pp.179-192, 1997.

A. N. Meltzoff, L. Murray, E. Simpson, M. Heimann, E. Nagy et al., Re-examination of Oostenbroek et al.(2016): evidence for neonatal imitation of tongue protrusion, 2018.

N. Mesgarani, C. Cheung, K. Johnson, and E. Chang, Phonetic feature encoding in human superior temporal gyrus, Science 343, vol.6174, pp.1006-1010, 2014.

D. Messinger, M. Dondi, C. Nelson-Goens, A. Beghi, A. Fogel, and F. Simion, How sleeping neonates smile, pp.48-54, 2002.

J. Micheletta, J. Whitehouse, L. A. Parr, and B. M. Waller, Facial expression recognition in crested macaques (Macaca nigra), Animal Cognition, vol.18, pp.985-990, 2015.

Y. Minagawa-kawai, S. Matsuoka, I. Dan, N. Naoi, K. Nakamura et al., Prefrontal activation associated with social attachment: facial-emotion recognition in mothers and infants, Cerebral Cortex, vol.19, pp.284-292, 2008.

S. Moineau, N. F. Dronkers, and E. Bates, Exploring the processing continuum of single-word comprehension in aphasia, Journal of Speech, Language, and Hearing Research, vol.48, pp.884-896, 2005.

R. Möttönen, R. Dutton, and K. E. Watkins, Auditory-motor processing of speech sounds, Cerebral Cortex, vol.23, pp.1190-1197, 2012.

R. Mukamel, A. D. Ekstrom, J. Kaplan, M. Iacoboni, and I. Fried, Single-neuron responses in humans during execution and observation of actions, Current biology 20, vol.8, pp.750-756, 2010.

K. G. Munhall, P. Gribble, M. Sacco, and M. Ward, Temporal constraints on the McGurk effect, Perception & Psychophysics, vol.58, pp.351-362, 1996.

A. Murata, H. Saito, J. Schug, K. Ogawa, and T. Kameda, Spontaneous facial mimicry is enhanced by the goal of inferring emotional states: evidence for moderation of "automatic" mimicry by higher cognitive processes, 2016.

M. Natale, Convergence of mean vocal intensity in dyadic communication as a function of social desirability, Journal of Personality and Social Psychology, vol.32, p.790, 1975.

A. R. Nath and M. S. Beauchamp, A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion, NeuroImage, vol.59, pp.781-787, 2012.

J. Navarra and S. Soto-faraco, Hearing lips in a second language: visual articulatory information enables the perception of second language sounds, Psychological research 71.1, pp.4-12, 2007.

D. T. Neal and T. L. Chartrand, Embodied emotion perception: amplifying and dampening facial feedback modulates emotion perception accuracy, Social Psychological and Personality Science, vol.2, issue.6, pp.673-678, 2011.

P. Neri, How inherently noisy is human sensory processing?, Psychonomic Bulletin & Review, vol.17, pp.802-808, 2010.

R. Neumann and F. Strack, "Mood contagion": the automatic transfer of mood between persons, Journal of Personality and Social Psychology, vol.79, issue.2, p.211, 2000.

P. M. Niedenthal, M. Brauer, J. B. Halberstadt, and Å. H. Innes-Ker, When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression, Cognition & Emotion, vol.15, pp.853-864, 2001.

P. M. Niedenthal, M. Mermillod, M. Maringer, and U. Hess, The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression, Behavioral and brain sciences 33.06, pp.417-433, 2010.
URL : https://hal.archives-ouvertes.fr/hal-00965109


T. Noah, Y. Schul, and R. Mayo, When both the original study and its failed replication are correct: Feeling observed eliminates the facial-feedback effect, Journal of Personality and Social Psychology, vol.114, p.657, 2018.

L. M. Oberman, P. Winkielman, and V. S. Ramachandran, Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions, Social Neuroscience, vol.2, pp.167-178, 2007.

M. Ochs, C. Pelachaud, and G. Mckeown, A User Perception-Based Approach to Create Smiling Embodied Conversational Agents, ACM Transactions on Interactive Intelligent Systems (TiiS) 7.1, p.4, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01479296

S. Y. Oh, J. Bailenson, N. Krämer, and B. Li, Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments, PLoS ONE, vol.11, issue.9, 2016.

J. J. Ohala, The acoustic origin of the smile, The Journal of the Acoustical Society of America, vol.68, pp.33-33, 1980.

M. S. Okun, D. Bowers, U. Springer, N. A. Shapira, S. A. Rasmussen et al., Intra-operative observations of contralateral smiles induced by deep brain stimulation, Neurocase, vol.10, issue.4, pp.271-279, 2004.

M. Oliva and A. Anikin, Pupil dilation reflects the time course of emotion recognition in human vocalizations, Scientific reports 8.1, p.4871, 2018.

A. Olsen, The Tobii I-VT fixation filter, Tobii Technology, 2012.

J. Oostenbroek, T. Suddendorf, M. Nielsen, J. Redshaw et al., Comprehensive longitudinal study challenges the existence of neonatal imitation in humans, Current Biology, vol.26, pp.1334-1338, 2016.

J. Oostenbroek, J. Redshaw, J. Davis, S. Kennedy-costantini, M. Nielsen et al., Re-evaluating the neonatal imitation hypothesis, 2018.

W. O. Brimijoin, M. A. Akeroyd, E. Tilbury, and B. Porr, The internal representation of vowel spectra investigated using behavioral response-triggered averaging, The Journal of the Acoustical Society of America, vol.133, pp.118-122, 2013.

E. Pampalk, A Matlab Toolbox to Compute Similarity from Audio, Proceedings of the ISMIR International Conference on Music Information Retrieval, 2004.

J. Panksepp, Affective neuroscience: The foundations of human and animal emotions, 2004.

J. Panksepp, Neurologizing the psychology of affects: How appraisal-based constructivism and basic emotion theory can coexist, Perspectives on Psychological Science, vol.2, issue.3, pp.281-296, 2007.

L. A. Parr and B. M. Waller, Understanding chimpanzee facial expression: insights into the evolution of communication, Social Cognitive and Affective Neuroscience, vol.1, issue.3, pp.221-228, 2006.

T. Partala and V. Surakka, The effects of affective interventions in human-computer interaction, Interacting with computers, vol.16, issue.2, pp.295-309, 2004.

S. Paulmann and M. D. Pell, Is there an advantage for recognizing multi-modal emotional stimuli?, Motivation and Emotion, vol.35, pp.192-201, 2011.

S. Paulmann, D. Titone, and M. Pell, How emotional prosody guides your way: evidence from eye movements, Speech Communication 54.1, pp.92-107, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00798614

J. W. Peirce, Generating stimuli for neuroscience using PsychoPy, Frontiers in Neuroinformatics, vol.2, 2008.

K. A. Pelphrey, N. J. Sasson, J. S. Reznick, G. Paul et al., Visual scanning of faces in autism, Journal of Autism and Developmental Disorders, vol.32, pp.249-261, 2002.

L. Pessoa, S. Japee, D. Sturman, and L. G. Ungerleider, Target visibility and visual awareness modulate amygdala responses to fearful faces, Cerebral cortex 16, pp.366-375, 2005.

C. I. Petkov, C. Kayser, T. Steudel, K. Whittingstall, M. Augath et al., A voice region in the monkey brain, Nature neuroscience, vol.11, p.367, 2008.

M. L. Phillips, A. W. Young, C. Senior, M. Brammer, C. Andrew et al., A specific neural substrate for perceiving facial expressions of disgust, Nature, vol.389, p.495, 1997.

R. J. Podesva, P. Callier, R. Voigt, and D. Jurafsky, The connection between smiling and GOAT fronting: Embodied affect in sociophonetic variation, Proceedings of the International Congress of Phonetic Sciences, vol.18, 2015.

E. Ponsot, P. Arias, and J. Aucouturier, Uncovering mental representations of smiled speech using reverse correlation, The Journal of the Acoustical Society of America, vol.143, pp.19-24, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01712385

E. Ponsot, J. J. Burred, P. Belin, and J. Aucouturier, Cracking the social code of speech prosody using reverse correlation, Proceedings of the National Academy of Sciences 115.15, pp.3972-3977, 2018.
URL : https://hal.archives-ouvertes.fr/hal-02004519

E. Prochazkova, L. Prochazkova, M. R. Giffin, H. S. Scholte, C. K. W. De Dreu, and M. E. Kret, Pupil mimicry promotes trust through the theory-of-mind network, Proceedings of the National Academy of Sciences, p.201803916, 2018.

R. R. Provine, Laughing, tickling, and the evolution of speech and self, Current Directions in Psychological Science, vol.13, pp.215-218, 2004.

F. Pulvermüller, M. Huss, F. Kherif, F. Moscoso-del-prado-martin, O. Hauk et al., Motor cortex maps articulatory features of speech sounds, Proceedings of the National Academy of Sciences 103.20, pp.7865-7870, 2006.

H. Quené, G. R. Semin, and F. Foroni, Audible smiles and frowns affect speech comprehension, Speech Communication, vol.54, issue.7, pp.917-922, 2012.

R Core Team, R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, 2016.

L. Rachman, M. Liuni, P. Arias, A. Lind, P. Johansson et al., DAVID: An opensource platform for real-time transformation of infra-segmental emotional cues in running speech, Behavior Research Methods, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01450511

S. A. Reber, J. Janisch, K. Torregrosa, J. Darlington, A. Kent et al., Formants provide honest acoustic cues to body size in American alligators, Scientific reports, p.1816, 2017.

H. T. Reis, I. M. Wilson, C. Monestere, S. Bernstein, K. Clark et al., What is smiling is beautiful and good, European Journal of Social Psychology, vol.20, pp.259-267, 1990.

L. F. Renner and M. W?odarczak, When a dog is a cat and how it changes your pupil size: Pupil dilation in response to information mismatch, Proc. Interspeech, pp.674-678, 2017.

S. Rigoulot and M. D. Pell, Seeing emotion with your ears: emotional prosody implicitly guides visual attention to faces, PLoS ONE, vol.7, issue.1, e30740, 2012.

K. R. Bogart and D. Matsumoto, Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome, Social Neuroscience, vol.5, pp.241-251, 2010.

A. Röbel and X. Rodet, Efficient spectral envelope estimation and its application to pitch shifting and envelope preservation, International Conference on Digital Audio Effects, pp.30-35, 2005.

A. Roebel, Shape-invariant speech transformation with the phase vocoder, InterSpeech, pp.2146-2149, 2010.
URL : https://hal.archives-ouvertes.fr/hal-01161278

L. D. Rosenblum, M. A. Schmuckler, and J. A. Johnson, The McGurk effect in infants, Perception & Psychophysics, vol.59, pp.347-357, 1997.

R. Rosenthal, The file drawer problem and tolerance for null results, Psychological Bulletin, vol.86, p.638, 1979.

J. B. Russ, R. C. Gur, and W. B. Bilker, Validation of affective and neutral sentence content for prosodic testing, Behavior Research Methods, vol.40, issue.4, pp.935-939, 2008.

M. Rychlowska, Y. Miyamoto, D. Matsumoto, U. Hess, E. Gilboa-schechtman et al., Heterogeneity of long-history migration explains cultural differences in reports of emotional expressivity and the functions of smiles, Proceedings of the National Academy of Sciences, p.201413661, 2015.

M. Rychlowska, R. E. Jack, O. G. B. Garrod, P. G. Schyns, J. D. Martin et al., Functional smiles: Tools for love, sympathy, and war, Psychological Science, vol.28, issue.9, pp.1259-1270, 2017.

D. A. Sauter, F. Eisner, P. Ekman, and S. K. Scott, Crosscultural recognition of basic emotions through nonverbal emotional vocalizations, Proceedings of the National Academy of Sciences 107, pp.2408-2412, 2010.

S. Schaefer, T. Mcphail, and J. Warren, Image deformation using moving least squares, ACM transactions on graphics (TOG), 2006.

L. Scheider, B. M. Waller, L. Oña, A. M. Burrows, and K. Liebal, Social use of facial expressions in hylobatids, PloS one 11, p.151733, 2016.

J. Schwartz, F. Berthommier, and C. Savariaux, Seeing to hear better: evidence for early audio-visual interactions in speech identification, Cognition 93, vol.2, pp.69-78, 2004.
URL : https://hal.archives-ouvertes.fr/hal-00186797

S. K. Scott, Perception and production of speech: Connected, but how?, pp.33-46, 2016.

R. Sekuler, A. B. Sekuler, and R. Lau, Sound alters visual motion perception, Nature, vol.385, issue.6614, p.308, 1997.

L. Shams, Y. Kamitani, and S. Shimojo, Illusions: What you see is what you hear, Nature, vol.408, issue.6814, p.788, 2000.

M. L. Simner, Newborn's response to the cry of another infant, p.136, 1971.

T. Singer, B. Seymour, J. O'Doherty, H. Kaube, R. J. Dolan et al., Empathy for pain involves the affective but not sensory components of pain, Science, vol.303, issue.5661, pp.1157-1162, 2004.

P. L. Smith and D. R. Little, Small is beautiful: In defense of the small-N design, Psychonomic Bulletin & Review, 2018.

R. Srinivasan, J. D. Golomb, and A. Martinez, A neural basis of facial action recognition in humans, In: Journal of Neuroscience, vol.36, pp.4434-4442, 2016.

A. Stasenko, F. E. Garcea, and B. Z. Mahon, What happens to the motor theory of perception when the motor system is damaged?, Language and Cognition, vol.5, issue.2-3, pp.225-238, 2013.

M. Stel and A. Van-knippenberg, The role of facial mimicry in the recognition of affect, Psychological Science 19, vol.10, p.984, 2008.

K. N. Stevens, Acoustic phonetics, vol.30, 2000.

B. H. Story, Mechanisms of voice production, The handbook of speech production, pp.34-58, 2015.

B. H. Story and K. Bunton, Vowel space density as an indicator of speech performance, The Journal of the Acoustical Society of America, vol.141, pp.458-464, 2017.

F. Strack, L. L. Martin, and S. Stepper, Inhibiting and facilitating conditions of the human smile: a nonobtrusive test of the facial feedback hypothesis, Journal of Personality and Social Psychology, vol.54, p.768, 1988.

R. L. Street, Jr., Speech convergence and speech evaluation in fact-finding interviews, Human Communication Research, vol.11, issue.2, pp.139-169, 1984.

W. H. Sumby and I. Pollack, Visual contribution to speech intelligibility in noise, The journal of the acoustical society of america 26, vol.2, pp.212-215, 1954.

J. M. Susskind, D. H. Lee, A. Cusi, R. Feiman et al., Expressing fear enhances sensory acquisition, Nature Neuroscience, vol.11, issue.7, p.843, 2008.

M. Tamietto and B. de Gelder, Neural bases of the nonconscious perception of emotional signals, Nature Reviews Neuroscience, vol.11, p.697, 2010.

M. Tamietto, L. Castelli, S. Vighetti, P. Perozzo, G. Geminiani et al., Unseen facial and bodily expressions trigger fast emotional reactions, Proceedings of the National Academy of Sciences, pp.17661-17666, 2009.

V. C. Tartter, Happy talk: Perceptual and acoustic effects of smiling on speech, Attention, Perception, & Psychophysics 27.1, pp.24-27, 1980.

V. C. Tartter and D. Braun, Hearing smiles and frowns in normal and whisper registers, The Journal of the Acoustical Society of America, vol.96, pp.2101-2107, 1994.

E. Thoret, P. Depalle, and S. Mcadams, Perceptually salient spectrotemporal modulations for recognition of sustained musical instruments, The Journal of the Acoustical Society of America, vol.140, pp.478-483, 2016.

X. Tian, Z. Wu, S. W. Lee, and E. Chng, Correlation-based frequency warping for voice conversion, 9th International Symposium on Chinese Spoken Language Processing (ISCSLP), IEEE, pp.211-215, 2014.

D. Tingley, T. Yamamoto, K. Hirose, L. Keele, and K. Imai, Mediation: R Package for Causal Mediation Analysis, Journal of Statistical Software, vol.59, 2014.

S. Tomkins, Affect imagery consciousness: Volume I: The positive affects, 1962.


D. Valente, A. Theurel, and E. Gentaz, The role of visual experience in the production of emotional facial expressions by blind people: a review, Psychonomic bulletin & review, pp.1-15, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01890762

A. van Boxtel, Facial EMG as a tool for inferring affective states, Proceedings of Measuring Behavior. Noldus Information Technology, Wageningen, pp.104-108, 2010.

J. A. van Hooff, A comparative approach to the phylogeny of laughter and smiling, 1972.

L. Varnet, F. Meunier, G. Trollé, and M. Hoen, Direct viewing of dyslexics' compensatory strategies in speech in noise using auditory classification images, PloS one 11, vol.4, p.153781, 2016.

J. H. Venezia, G. Hickok, and V. M. Richards, Auditory "bubbles": Efficient classification of the spectrotemporal modulations essential for speech intelligibility, The Journal of the Acoustical Society of America, vol.140, pp.1072-1088, 2016.

E. Verona, C. J. Patrick, J. J. Curtin, M. M. Bradley, and P. Lang, Psychopathy and physiological response to emotionally evocative sounds, Journal of abnormal psychology, vol.113, p.99, 2004.

F. Villavicencio, A. Robel, and X. Rodet, Improving LPC spectral envelope extraction of voiced speech by true-envelope estimation, Acoustics, Speech and Signal Processing, vol.1, pp.I-I, 2006.
URL : https://hal.archives-ouvertes.fr/hal-01161354

S. R. Vrana, E. L. Spence, and P. Lang, The startle probe response: a new measure of emotion?, In: Journal of abnormal psychology, vol.97, p.487, 1988.

J. Vroomen, J. Driver, and B. D. Gelder, Is cross-modal integration of emotional expressions independent of attentional resources?, In: Cognitive, Affective, & Behavioral Neuroscience, vol.1, pp.382-387, 2001.

E. Wagenmakers, T. Beek, L. Dijkhoff, Q. F. Gronau, A. Acosta et al., Registered Replication Report: Strack, Martin, & Stepper (1988), Perspectives on Psychological Science, vol.11, issue.6, pp.917-928, 2016.

C. Y. Wan, A. G. Wood, C. David, S. J. Reutens, and . Wilson, Early but not late-blindness leads to enhanced auditory perception, Neuropsychologia 48.1, pp.344-348, 2010.

S. Wang, M. Jiang, X. Morin Duchesne, E. A. Laugeson, D. P. Kennedy et al., Atypical visual saliency in autism spectrum disorder quantified through model-based eye tracking, Neuron, vol.88, pp.604-616, 2015.

J. E. Warren, D. A. Sauter, F. Eisner, J. Wiland et al., Positive emotions preferentially engage an auditory-motor "mirror" system, Journal of Neuroscience, vol.26, pp.13067-13075, 2006.

K. E. Watkins, A. P. Strafella, and T. Paus, Seeing and hearing speech excites the motor system involved in speech production, Neuropsychologia, vol.41, issue.8, pp.989-994, 2003.

R. Watson, M. Latinus, T. Noguchi, O. Garrod, F. Crabbe et al., Dissociating task difficulty from incongruence in face-voice emotion integration, Frontiers in Human Neuroscience, vol.7, p.744, 2013.
URL : https://hal.archives-ouvertes.fr/hal-02006942

R. Watson, M. Latinus, T. Noguchi, O. Garrod, F. Crabbe et al., Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration, Journal of Neuroscience, vol.34, pp.6813-6821, 2014.
URL : https://hal.archives-ouvertes.fr/hal-02005297

P. van der Wel and H. van Steenbergen, Pupil dilation as an index of effort in cognitive control tasks: A review, Psychonomic Bulletin & Review, pp.1-11, 2018.

B. Wicker, C. Keysers, J. Plailly, J. Royet, V. Gallese et al., Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust, Neuron, vol.40, pp.655-664, 2003.

K. D. Williams, Ostracism: The power of silence, 2002.

S. M. Wilson, A. P. Saygin, M. I. Sereno, and M. Iacoboni, Listening to speech activates motor areas involved in speech production, Nature Neuroscience, vol.7, p.701, 2004.

V. Wörmann, M. Holodynski, J. Kärtner, and H. Keller, A cross-cultural comparison of the development of the social smile: A longitudinal study of maternal and infant imitation in 6-and 12-week-old infants, Infant Behavior and Development, vol.35, pp.335-347, 2012.

N. Yee, J. N. Bailenson, and K. Rickertsen, A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces, Proceedings of the SIGCHI conference on Human factors in computing systems, pp.1-10, 2007.

J. M. Yoon and C. Tennie, Contagious yawning: a reflection of empathy, mimicry, or contagion?, Animal Behaviour, vol.79, pp.1-3, 2010.

S. Yoshida, T. Tanikawa, S. Sakurai, M. Hirose, and T. Narumi, Manipulation of an emotional experience by realtime deformed facial feedback, Proceedings of the 4th Augmented Human International Conference, pp.35-42, 2013.

H. Yu, O. G. B. Garrod, and P. G. Schyns, Perception-driven facial expression synthesis, Computers & Graphics, vol.36, issue.3, pp.152-162, 2012.

D. I. Zafeiriou, A. Ververi, and E. Vargiami, Childhood autism and associated comorbidities, Brain & Development, vol.29, pp.257-272, 2007.

R. B. Zajonc, P. K. Adelmann, S. T. Murphy, and P. M. Niedenthal, Convergence in the physical appearance of spouses, Motivation and Emotion, vol.11, issue.4, pp.335-346, 1987.

J. Zaki and K. Ochsner, Chapter 50: Empathy, in L. Barrett, M. Lewis, and J. M. Haviland-Jones (eds.), Handbook of Emotions, Fourth Edition, 2016.

J. Zaki and K. N. Ochsner, The neuroscience of empathy: progress, pitfalls and promise, Nature neuroscience, vol.15, p.675, 2012.

M. Zilbovicius, A. Saitovitch, T. Popa, E. Rechtman, L. Diamandis et al., Autism, social cognition and superior temporal sulcus, Open Journal of Psychiatry, p.46, 2013.
URL : https://hal.archives-ouvertes.fr/hal-01253349