Using Emotional Interactions for Visual Navigation Task Learning - Archive ouverte HAL
Conference paper, Year: 2010

Using Emotional Interactions for Visual Navigation Task Learning

Abstract

The aim of this study is to show how robot learning could be made easier and more accessible to non-experts if it relies on emotional interactions, more precisely on social referencing abilities, rather than on specialized supervised learning techniques. To test this idea, we coupled two systems: a robotic head able to learn to recognize and imitate emotional facial expressions, and a mobile robot able to learn autonomous visual navigation tasks in a real environment. Two possible ways of coupling these two systems are tested. First, the emotional interactions are used to qualify the robot's behavior. The robot shows its ability to learn how to reach a goal place in its environment using emotional interaction signals from the experimenter. These signals give the robot information about the quality of its behavior and allow it to learn place-action associations that build an attraction basin around the goal place. Second, the emotional interactions are used to qualify the robot's immediate environment. The robot shows its ability to learn how to avoid a place in its environment by associating it with the experimenter's anger facial expression. The first strategy allows the experimenter to teach the robot to reach a specific place from anywhere in its environment. However, it requires more learning time than the second strategy, which is very fast but seems inappropriate for learning to reach a place rather than to avoid it. While these two strategies achieve satisfactory results, there is no reason why they should be mutually exclusive, and in conclusion we discuss coupling both types of learning. Our results also show that relying on the natural expertise of humans in recognizing and expressing emotions is a very promising approach to human-robot interaction. Furthermore, our approach can provide new insights into how, at an early age, humans can develop high-level social referencing capabilities from low-level sensorimotor dynamics.
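To make the first strategy more concrete, the sketch below illustrates one way place-action associations could be strengthened or weakened by an emotional reward signal derived from the experimenter's recognized facial expression. It is only a minimal illustration under simplifying assumptions, not the authors' implementation: the place codes, action set, expression-to-reward mapping, and learning rule are all hypothetical.

```python
# Minimal sketch (not the paper's implementation) of place-action learning
# driven by an emotional interaction signal, as in the first strategy above.
# All names and parameters here are hypothetical simplifications.

import random

ACTIONS = ["north", "south", "east", "west"]

# Hypothetical mapping from the recognized facial expression of the
# experimenter to a scalar reward used to qualify the robot's behavior.
EXPRESSION_REWARD = {"happiness": 1.0, "neutral": 0.0, "anger": -1.0}

class PlaceActionLearner:
    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        # Association strength between a recognized place and an action.
        self.weights = {}  # (place, action) -> float

    def choose_action(self, place, epsilon=0.2):
        # Mostly follow the strongest place-action association,
        # occasionally explore a random action.
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.weights.get((place, a), 0.0))

    def update(self, place, action, expression):
        # Strengthen or weaken the association according to the
        # emotional signal expressed by the experimenter.
        reward = EXPRESSION_REWARD.get(expression, 0.0)
        key = (place, action)
        self.weights[key] = self.weights.get(key, 0.0) + self.lr * reward


# Usage example: the robot acts at a recognized place, the experimenter
# reacts with a facial expression, and the association is updated.
learner = PlaceActionLearner()
place = "place_3"                    # hypothetical place code
action = learner.choose_action(place)
learner.update(place, action, "happiness")  # approval reinforces the action
```

Repeating such updates over many visited places would gradually bias actions toward the rewarded goal place, which is the intuition behind the attraction basin described above; the second strategy instead attaches the negative signal directly to a place rather than to a place-action pair.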
Main file
Hasson_Boucenna_Keer2010.pdf (986.32 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00538386, version 1 (22-11-2010)

Identifiers

  • HAL Id: hal-00538386, version 1

Cite

Cyril Hasson, Sofiane Boucenna, Philippe Gaussier, Laurence Hafemeister. Using Emotional Interactions for Visual Navigation Task Learning. International Conference on Kansei Engineering and Emotion Research, Mar 2010, Paris, France. pp.1578-1587. ⟨hal-00538386⟩
121 views
46 downloads
