Learning joint multimodal behaviors for face-to-face interaction: performance & properties of statistical models - Archive ouverte HAL
Conference paper, Year: 2015

Learning joint multimodal behaviors for face-to-face interaction: performance & properties of statistical models

Abstract

We evaluate the ability of statistical models, namely Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs), to capture the interplay and coordination between the multimodal behaviors of two individuals engaged in face-to-face interaction. We structure the intricate sensory-motor coupling of the joint multimodal scores by segmenting the whole interaction into so-called interaction units (IUs). We show that the proposed statistical models capture the natural dynamics of the interaction, and that DBNs are particularly well suited to reproducing the original distributions of so-called coordination histograms.
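To make the modeling idea concrete, here is a minimal, self-contained sketch of how an HMM can relate hidden interaction units to observed joint multimodal events. All state names, observation symbols, and probabilities below are invented for illustration; they are not taken from the paper, which learns such parameters from recorded interaction data.

```python
import numpy as np

# Toy illustration (not the paper's implementation): a discrete HMM whose
# hidden states stand for hypothetical "interaction units" (IUs) and whose
# observations are discretized joint multimodal events of the dyad.
states = ["IU_speak", "IU_listen"]                # hypothetical IUs
obs_symbols = ["gaze+nod", "gaze_only", "away"]   # hypothetical joint events

pi = np.array([0.6, 0.4])        # initial IU distribution (assumed)
A = np.array([[0.7, 0.3],        # IU transition probabilities (assumed)
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],   # emission probabilities P(event | IU)
              [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Return the most likely IU sequence for observation indices `obs`."""
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))             # best path probability ending in state
    psi = np.zeros((T, N), dtype=int)    # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A   # N x N candidate transitions
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# Decode a short event sequence: gaze+nod, gaze+nod, away, away.
print(viterbi([0, 0, 2, 2]))
# → ['IU_speak', 'IU_speak', 'IU_listen', 'IU_listen']
```

A DBN generalizes this picture by factoring the hidden state and the per-partner observation streams into separate variables with explicit dependencies, which is what allows it to reproduce the coordination histograms mentioned in the abstract.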
Main file
learning_HRI2015_short.pdf (148.66 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01110290, version 1 (27-01-2015)

Identifiers

  • HAL Id: hal-01110290, version 1

Cite

Gérard Bailly, Alaeddine Mihoub, Christian Wolf, Frédéric Elisei. Learning joint multimodal behaviors for face-to-face interaction: performance & properties of statistical models. Human-Robot Interaction. Workshop on Behavior Coordination between Animals, Humans, and Robots, Mar 2015, Portland, United States. ⟨hal-01110290⟩
320 views
203 downloads
