Poster communications

Multimodal-Based Upper Facial Gestures Synthesis for Engaging Virtual Agents

Abstract : A myriad of applications involve human–machine interaction, such as reception agents, home assistants, chatbots, and autonomous vehicles' agents. Humans can control virtual agents through various modalities, including sound, vision, and touch. In this paper, we discuss the design of engaging virtual agents with expressive gestures and prosody. We also propose an architecture that generates upper facial movements from two modalities: speech and text. This paper is part of a broader effort to review the mechanisms that govern multimodal interaction, such as the agent's expressiveness and the adaptation of its behavior, in order to help remove technological barriers and develop a conversational agent capable of adapting naturally and coherently to its interlocutor.
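To make the idea of a speech-and-text driven architecture concrete, the sketch below shows a minimal two-encoder fusion model in PyTorch that maps frame-aligned acoustic features and word embeddings to upper-facial motion parameters (e.g. eyebrow and eyelid action-unit intensities). This is an illustrative assumption only: the module names, dimensions, and concatenation-based fusion are not taken from the paper, whose actual model is not detailed in this record.

```python
# Illustrative sketch only; all design choices below are assumptions, not the authors' model.
import torch
import torch.nn as nn

class UpperFaceGestureSynthesizer(nn.Module):
    """Hypothetical speech+text fusion model predicting per-frame upper-facial motion parameters."""

    def __init__(self, speech_dim=80, text_dim=300, hidden_dim=128, n_face_params=10):
        super().__init__()
        # Separate recurrent encoders, one per modality (assumed design).
        self.speech_encoder = nn.GRU(speech_dim, hidden_dim, batch_first=True)
        self.text_encoder = nn.GRU(text_dim, hidden_dim, batch_first=True)
        # Simple concatenation-based fusion followed by a frame-wise decoder.
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_face_params),
        )

    def forward(self, speech_feats, text_feats):
        # speech_feats: (batch, frames, speech_dim) acoustic features, e.g. mel-spectrogram frames
        # text_feats:   (batch, frames, text_dim) word embeddings aligned to the same frames
        s, _ = self.speech_encoder(speech_feats)
        t, _ = self.text_encoder(text_feats)
        fused = torch.cat([s, t], dim=-1)
        return self.decoder(fused)  # (batch, frames, n_face_params)

# Toy usage with random tensors.
model = UpperFaceGestureSynthesizer()
speech = torch.randn(2, 100, 80)
text = torch.randn(2, 100, 300)
face_params = model(speech, text)
print(face_params.shape)  # torch.Size([2, 100, 10])
```

The two-encoder-plus-fusion layout is a common baseline for multimodal behaviour generation; the paper itself may use a different alignment or fusion scheme.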
Metadata

https://hal.archives-ouvertes.fr/hal-03377549
Contributor : CCSD Sciencesconf.org
Submitted on : Thursday, October 14, 2021 - 11:14:47 AM
Last modification on : Wednesday, March 16, 2022 - 3:44:27 AM
Long-term archiving on : Saturday, January 15, 2022 - 6:32:40 PM

File

WACAI2021_Reviewed.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03377549, version 1

Citation

Mireille Fares, Catherine I. Pelachaud, Nicolas Obin. Multimodal-Based Upper Facial Gestures Synthesis for Engaging Virtual Agents. WACAI 2021, Oct 2021, Saint Pierre d'Oléron, France. ⟨hal-03377549⟩

Metrics

Record views: 96
File downloads: 51