Multimodal complex emotions: gesture expressivity and blended facial expressions
Journal article — International Journal of Humanoid Robotics, 2006


Abstract

One of the challenges of designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors in several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex emotions (superposition and masking) in facial expressions of the agent. We explain how the complementary aspects of our work on the corpus and the computational model are used to specify complex emotional behaviors.
Main file: LCPI_IJHR_2006_MARTIN.pdf (632.18 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00787584 , version 1 (15-02-2013)

Identifiers

Cite

Jean-Claude Martin, Radoslaw Niewiadomski, Laurence Devillers, Stéphanie Buisine, Catherine Pelachaud. Multimodal complex emotions: gesture expressivity and blended facial expressions. International Journal of Humanoid Robotics, 2006, 3, pp.1-23. ⟨10.1142/S0219843606000825⟩. ⟨hal-00787584⟩
202 views
603 downloads
