Public Speaking Training with a Multimodal Interactive Virtual Audience Framework - Archive ouverte HAL
Conference paper, 2015

Public Speaking Training with a Multimodal Interactive Virtual Audience Framework

Abstract

We have developed an interactive virtual audience platform for public speaking training. The user's public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on that behavior. The flexibility of our system allows us to compare different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences), and trained behaviors (e.g. general public speaking performance vs. specific behaviors).
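The sense-analyze-feedback loop described in the abstract could be sketched roughly as follows. This is a hypothetical illustration, not code from the paper: the class, threshold values, and behavior names (speech rate, gaze fraction) are assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class BehaviorObservation:
    """One multimodal sensor reading of the speaker (illustrative features)."""
    speech_rate_wpm: float   # words per minute, from the audio channel
    gaze_at_audience: float  # fraction of time looking at the audience, 0..1


def audience_feedback(obs: BehaviorObservation) -> dict:
    """Map observed behavior to feedback rendered by virtual characters
    and generic visual widgets (thresholds are made up for the sketch)."""
    feedback = {}
    # Virtual characters could appear engaged (nodding) or distracted
    # (looking away) depending on the speaker's eye contact.
    feedback["characters"] = (
        "engaged" if obs.gaze_at_audience >= 0.5 else "distracted"
    )
    # A generic widget could warn when the speaker talks too fast.
    feedback["pace_widget"] = "too fast" if obs.speech_rate_wpm > 170 else "ok"
    return feedback


print(audience_feedback(BehaviorObservation(speech_rate_wpm=180,
                                            gaze_at_audience=0.7)))
```

In a real system of this kind, the observation would be refreshed continuously from the sensor streams and the feedback rules swapped out per training condition, which is what makes comparisons across media, social situations, and trained behaviors possible.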
No file deposited

Dates and versions

hal-02439103 , version 1 (14-01-2020)

Identifiers

Cite

Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer. Public Speaking Training with a Multimodal Interactive Virtual Audience Framework. International Conference on Multimodal Interaction, Nov 2015, Seattle, United States. pp.367-368, ⟨10.1145/2818346.2823294⟩. ⟨hal-02439103⟩

Collections

TICE TEL