Learning joint multimodal behaviors for face-to-face interaction: performance & properties of statistical models

Gérard Bailly (1), Alaeddine Mihoub (2,1), Christian Wolf (2,*), Frédéric Elisei (3,1)
* Corresponding author
1. GIPSA-CRISSP, GIPSA-DPC - Département Parole et Cognition
2. imagine - Extraction de Caractéristiques et Identification, LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
3. GIPSA-Services, GIPSA-lab - Grenoble Images Parole Signal Automatique
Abstract: We evaluate the ability of statistical models, namely Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs), to capture the interplay and coordination between the multimodal behaviors of two individuals engaged in face-to-face interaction. We structure the intricate sensory-motor coupling of the joint multimodal scores by segmenting the whole interaction into so-called interaction units (IUs). We show that the proposed statistical models capture the natural dynamics of the interaction, and that DBNs are particularly well suited to reproducing the original distributions of so-called coordination histograms.
Document type: Conference papers

Cited literature: 26 references

https://hal.archives-ouvertes.fr/hal-01110290
Contributor: Gérard Bailly
Submitted on: Tuesday, January 27, 2015 - 6:32:19 PM

File: learning_HRI2015_short.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01110290, version 1

Citation

Gérard Bailly, Alaeddine Mihoub, Christian Wolf, Frédéric Elisei. Learning joint multimodal behaviors for face-to-face interaction: performance & properties of statistical models. Human-Robot Interaction. Workshop on Behavior Coordination between Animals, Humans, and Robots, Mar 2015, Portland, United States. ⟨hal-01110290⟩
