Conference papers

Modeling Perception-Action Loops: Comparing Sequential Models with Frame-Based Classifiers

Alaeddine Mihoub 1,2,*, Gérard Bailly 2, Christian Wolf 1
* Corresponding author
1 imagine - Extraction de Caractéristiques et Identification
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: Modeling multimodal perception-action loops in face-to-face interactions is a crucial step toward building sensory-motor behaviors for social robots and user-aware Embodied Conversational Agents (ECAs). In this paper, we compare trainable behavioral models based on sequential models (HMMs) with classifiers (SVMs and Decision Trees) that are inherently unsuited to modeling sequential structure. These models aim to provide robots with pertinent perception/action skills, generating optimal actions given the perceived actions of others and joint goals. We applied these models to parallel speech and gaze data collected from interacting dyads. The challenge was to predict the gaze of one subject given the gaze of the interlocutor and the voice activity of both. We show that the Incremental Discrete HMM (IDHMM) generally outperforms the classifiers, and that injecting input context into the modeling process significantly improves the performance of all algorithms.
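The paper's data and feature encodings are not reproduced here, but the idea of "injecting input context" into a frame-based classifier can be sketched on synthetic stand-in data: each frame's feature vector is augmented with the preceding frames, so a stateless classifier such as a Decision Tree still sees a short input history. The `add_context` helper and the synthetic features below are hypothetical illustrations, not the authors' pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def add_context(X, width):
    """Append the `width` previous frames to each frame's feature
    vector (zero-padded at the sequence start), giving a stateless
    frame-based classifier access to a short input history."""
    parts = [X]
    for k in range(1, width + 1):
        past = np.roll(X, shift=k, axis=0)
        past[:k] = 0  # no history exists before the first frame
        parts.append(past)
    return np.hstack(parts)

# Synthetic stand-ins for the observations described in the abstract:
# per-frame voice activity of both partners plus interlocutor gaze.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3)).astype(float)
y = rng.integers(0, 4, size=500)  # subject's gaze target per frame

Xc = add_context(X, width=3)          # context-augmented features
clf = DecisionTreeClassifier(max_depth=5).fit(Xc, y)
pred = clf.predict(Xc)                # one gaze label per frame
```

An HMM-based model would instead capture this history through its hidden state and transition probabilities, which is the contrast the paper evaluates.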

Cited literature: 29 references
Contributor: Alaeddine Mihoub
Submitted on: Friday, September 5, 2014 - 7:13:16 PM
Last modification on: Tuesday, June 1, 2021 - 2:08:08 PM
Long-term archiving on: Saturday, December 6, 2014 - 11:47:00 AM


Files produced by the author(s)


  • HAL Id: hal-01061454, version 1


Alaeddine Mihoub, Gérard Bailly, Christian Wolf. Modeling Perception-Action Loops: Comparing Sequential Models with Frame-Based Classifiers. HAI 2014 - 2nd International Conference on Human-Agent Interaction, Oct 2014, Tsukuba, Japan. pp.309-314. ⟨hal-01061454⟩


