Learning multimodal behavioral models for face-to-face social interaction

Alaeddine Mihoub 1, 2 Gérard Bailly 1 Christian Wolf 2 Frédéric Elisei 3, 1
1 CRISSP team, Département Parole et Cognition (Speech and Cognition Department), GIPSA-lab - Grenoble Images Parole Signal Automatique
2 Imagine team - Extraction de Caractéristiques et Identification (Feature Extraction and Identification), LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
3 GIPSA-Services, GIPSA-lab - Grenoble Images Parole Signal Automatique
Abstract: The aim of this paper is to model multimodal perception-action loops of human behavior in face-to-face interactions. The long-term goal of this research is to give artificial agents the social skills to engage in believable interactions with human interlocutors. To this end, we propose trainable behavioral models that generate optimal actions given others' perceived actions and joint goals. We first compare sequential models - in particular Discrete Hidden Markov Models (DHMMs) - with standard classifiers (SVMs and Decision Trees). We propose a modification of the initialization of the DHMMs in order to better capture the recurrent structure of the sensory-motor states. We show that explicit state duration modeling by Hidden Semi-Markov Models (HSMMs) improves prediction performance. We applied these models to parallel speech and gaze data collected from interacting dyads. The challenge was to predict the gaze of one subject given the gaze of the interlocutor and the voice activity of both. For both HMMs and HSMMs, the Short-Time Viterbi concept is used for incremental decoding and generation. We objectively evaluated many properties of the proposed models in order to go beyond pure classification performance. Results show that while Incremental Discrete HMMs (IDHMMs) were more efficient than classic classifiers, the Incremental Discrete HSMMs (IDHSMMs) gave the best performance. This result emphasizes the relevance of state duration modeling.
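The decoding step described in the abstract rests on the Viterbi algorithm over discrete hidden (sensory-motor) states. As an illustration only - not the authors' implementation - a minimal log-space Viterbi decoder for a discrete HMM might look like the sketch below; the function name and all parameters are hypothetical. The short-time variant used in the paper differs in that it commits to states incrementally, as partial paths converge, rather than waiting for the whole observation sequence.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path of a discrete HMM (log-space Viterbi).

    obs -- sequence of observation indices
    pi  -- (S,) initial state distribution
    A   -- (S, S) transitions, A[i, j] = P(state j | state i)
    B   -- (S, O) emissions,   B[s, o] = P(obs o  | state s)
    """
    pi, A, B = (np.asarray(x, dtype=float) for x in (pi, A, B))
    S, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)           # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[i, j]: best path via i -> j
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(S)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(delta))]              # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

An HSMM decoder extends this by scoring explicit state durations instead of relying on the geometric durations implied by self-transitions, which is the property the paper credits for the IDHSMMs' best performance.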

https://hal.archives-ouvertes.fr/hal-01170991
Contributor: Christian Wolf
Submitted on: Thursday, July 2, 2015 - 4:20:29 PM
Last modification on: Tuesday, February 26, 2019 - 4:35:38 PM

Citation

Alaeddine Mihoub, Gérard Bailly, Christian Wolf, Frédéric Elisei. Learning multimodal behavioral models for face-to-face social interaction. Journal on Multimodal User Interfaces, Springer, 2015, 9 (3), pp. 195-210. ⟨10.1007/s12193-015-0190-7⟩. ⟨hal-01170991⟩
