Modeling sensory-motor behaviors for social robots

Alaeddine Mihoub ¹, Gérard Bailly ², Christian Wolf ¹
¹ imagine (Extraction de Caractéristiques et Identification), LIRIS (Laboratoire d'InfoRmatique en Image et Systèmes d'information)
² MAGIC, Département Parole et Cognition (DPC), GIPSA-lab
Abstract: Modeling multimodal perception-action loops in face-to-face interactions is a crucial step in building sensory-motor behaviors for social robots or user-aware Embodied Conversational Agents (ECAs). In this paper, we compare trainable behavioral models based on sequential models (HMMs) with classifiers (SVMs and Decision Trees) that are inherently ill-suited to modeling sequential aspects. These models aim to provide robots with pertinent perception/action skills, i.e. to generate optimal actions given the perceived actions of others and the joint goals. We applied these models to parallel speech and gaze data collected from interacting dyads. The challenge was to predict the gaze of one subject given the gaze of the interlocutor and the voice activity of both. We show that an Incremental Discrete HMM (IDHMM) generally outperforms the classifiers and that injecting input context into the modeling process significantly improves the performance of all algorithms.
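The gaze-prediction task described above lends itself to a compact illustration. The sketch below is a minimal discrete-HMM decoder in Python, not the paper's IDHMM: the state and observation alphabets (gaze targets, and joint interlocutor-gaze × voice-activity symbols standing in for the "input context") are hypothetical, and the randomly drawn transition/emission matrices stand in for parameters that would in practice be estimated from the dyadic speech/gaze corpus.

```python
import numpy as np

# Hypothetical discrete alphabets (not the paper's actual coding scheme):
# hidden states  = gaze targets of the modeled subject
# observations   = joint symbols combining interlocutor gaze and voice
#                  activity, i.e. the "input context" injected into the model.
GAZE_TARGETS = ["face", "object", "elsewhere"]            # hidden states
CONTEXT = [(g, v) for g in ["face", "object", "elsewhere"]
                  for v in ["silence", "self", "other"]]  # 9 joint symbols

n_states, n_obs = len(GAZE_TARGETS), len(CONTEXT)

rng = np.random.default_rng(0)
# Illustrative parameters; real ones would be trained on the corpus
# (e.g. by Baum-Welch, or by counting on labeled frames).
A = rng.dirichlet(np.ones(n_states), size=n_states)   # state transitions
B = rng.dirichlet(np.ones(n_obs), size=n_states)      # emission probabilities
pi = np.full(n_states, 1.0 / n_states)                # initial distribution

def viterbi(obs, A, B, pi):
    """Most likely hidden gaze sequence for a discrete observation sequence."""
    T, N = len(obs), A.shape[0]
    delta = np.zeros((T, N))           # best log-prob of paths ending in each state
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    logA, logB = np.log(A), np.log(B)
    delta[0] = np.log(pi) + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA        # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):     # backtrack along the best path
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy observation stream of joint context symbols (indices into CONTEXT).
obs = rng.integers(0, n_obs, size=20)
print([GAZE_TARGETS[s] for s in viterbi(obs, A, B, pi)])
```

Encoding the observations as joint (interlocutor gaze, voice activity) symbols is one simple way to inject input context into a discrete HMM, in the spirit of the paper's finding that such context significantly helps all the compared models.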
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-01527421
Contributor: Équipe Gestionnaire des Publications SI LIRIS
Submitted on: Wednesday, May 24, 2017 - 1:44:58 PM
Last modification on: Tuesday, February 26, 2019 - 4:35:38 PM

Identifiers

  • HAL Id: hal-01527421, version 1

Citation

Alaeddine Mihoub, Gérard Bailly, Christian Wolf. Modeling sensory-motor behaviors for social robots. Workshop Affect, Compagnon Artificiel, Interaction, Jun 2014, Rouen, France. ⟨hal-01527421⟩
