Does a Virtual Talking Face Generate Proper Multimodal Cues to Draw User's Attention to Points of Interest?
Conference paper, 2006


Abstract

We present a series of experiments investigating face-to-face interaction between an Embodied Conversational Agent (ECA) and a human interlocutor. The ECA is embodied by a video-realistic talking head with independent head and eye movements. To be beneficial in face-to-face interaction, the ECA should be able to derive meaning from the communicative gestures of a human interlocutor and, likewise, to reproduce such gestures. By conveying its capability to interpret human behaviour, the system encourages the interlocutor to show appropriate natural activity. It is therefore important that the ECA knows how to display what would correspond to mental states in humans. This allows the interlocutor to interpret the machine processes of the system in terms of human expressiveness and to assign them a corresponding meaning, so that the system can maintain an interaction based on human patterns. In a first experiment we investigated the ability of our talking head to direct user attention with facial deictic cues (Raidt, Bailly et al. 2005). Users interact with the ECA during a simple card game that offers different levels of help and guidance through facial deictic cues. We analyzed the users' performance and their perception of the quality of the assistance given by the ECA. The experiment showed that users benefit from its presence and its facial deictic cues. In the follow-up series of experiments presented here, we investigated the effect of enhancing the multimodality of the deictic gestures by adding a spoken instruction.

Dates and versions

hal-00366537 , version 1 (09-03-2009)

Identifiers

  • HAL Id : hal-00366537 , version 1

Cite

Stephan Raidt, Gérard Bailly, Frédéric Elisei. Does a Virtual Talking Face Generate Proper Multimodal Cues to Draw User's Attention to Points of Interest?. International Conference on Language Resources and Evaluation (LREC), May 2006, Genoa, Italy. pp. 2544-2549. ⟨hal-00366537⟩

Collections

UGA CNRS ICP
