Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data

Gérard Bailly 1, Frédéric Elisei 2,1
1 CRISSP team, Département Parole et Cognition (DPC), GIPSA-lab - Grenoble Images Parole Signal Automatique
2 GIPSA-Services, GIPSA-lab - Grenoble Images Parole Signal Automatique
Abstract: The main aim of artificial intelligence (AI) is to provide machines with intelligence, and machine learning is now widely used to extract such intelligence from data. Collecting and modeling multimodal interactive data is thus a major issue for fostering AI for HRI. We first discuss the chicken-and-egg problem of collecting ground-truth HRI data without having robots with mature social skills at our disposal. We also comment on specific issues raised by current multimodal end-to-end mapping frameworks. We then analyze the benefits and challenges of using immersive teleoperation to endow humanoid robots with such skills. We finally argue for establishing stronger gateways between the HRI and Augmented/Virtual Reality research domains.
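To make the "multimodal end-to-end mapping" notion concrete, below is a minimal, hypothetical sketch (in PyTorch, not the authors' actual system) of behavior cloning on immersive-teleoperation logs: a recurrent network maps the human partner's multimodal observation stream to the robot commands demonstrated by the pilot. All names and dimensions (OBS_DIM, ACT_DIM, TeleopBehaviorModel) are illustrative assumptions.

```python
# Hypothetical sketch: learning an interactive behavior model from
# teleoperation data by behavior cloning. Feature choices are assumptions.
import torch
import torch.nn as nn

OBS_DIM = 64   # e.g., concatenated speech, gaze, and head-motion features
ACT_DIM = 16   # e.g., robot gaze target, head pose, speech-activity cues

class TeleopBehaviorModel(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(OBS_DIM, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, ACT_DIM)

    def forward(self, obs):          # obs: (batch, time, OBS_DIM)
        h, _ = self.encoder(obs)     # encode the interaction context
        return self.decoder(h)       # predict per-frame action commands

model = TeleopBehaviorModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One behavior-cloning step on a (synthetic) teleoperation batch:
obs = torch.randn(8, 100, OBS_DIM)   # partner-side multimodal observations
act = torch.randn(8, 100, ACT_DIM)   # pilot's demonstrated robot commands
loss = loss_fn(model(obs), act)
optim.zero_grad()
loss.backward()
optim.step()
```

Such a direct observation-to-command mapping illustrates why ground-truth demonstrations matter: the model can only reproduce socio-communicative skills that the teleoperating pilot actually exhibited in the data.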
Document type: Conference papers


https://hal.archives-ouvertes.fr/hal-01835008
Contributor: Gérard Bailly
Submitted on: Wednesday, July 11, 2018 - 10:08:01 AM

File: gb_AI-MHRI2018.pdf (produced by the author(s))

Identifiers

HAL Id: hal-01835008
DOI: 10.21437/AI-MHRI.2018-10

Citation

Gérard Bailly, Frédéric Elisei. Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, Jul 2018, Stockholm, Sweden. pp.39-43, ⟨10.21437/AI-MHRI.2018-10⟩. ⟨hal-01835008⟩
