The Impact of Human-Robot Interfaces on the Learning of Visual Objects

Pierre Rouanet (1), Pierre-Yves Oudeyer (1), Fabien Danieau (2), David Filliat (1, 3)
1 Flowers - Flowing Epigenetic Robots and Systems
INRIA Bordeaux - Sud-Ouest, ENSTA ParisTech U2IS - Unité d'Informatique et d'Ingénierie des Systèmes
Abstract: This paper studies the impact of interfaces that allow non-expert users to efficiently and intuitively teach a robot to recognize new visual objects. We present challenges that need to be addressed for the real-world deployment of robots capable of learning new visual objects in interaction with everyday users. We argue that, in addition to robust machine learning and computer vision methods, well-designed interfaces are crucial for learning efficiency. In particular, interfaces can be key in helping non-expert users collect good learning examples and thus improve the performance of the overall learning system. We then present four alternative human-robot interfaces: three based on the use of a mediating artifact (smartphone, wiimote, wiimote with laser pointer) and one based on natural human gestures (with a Wizard-of-Oz recognition system). These interfaces mainly vary in the kind of feedback provided to the user, allowing them to understand more or less easily what the robot is perceiving, and thus guiding how they provide training examples. We then evaluate the impact of these interfaces, in terms of learning efficiency, usability, and user experience, through a real-world, large-scale user study. In this experiment, we asked participants to teach a robot twelve different new visual objects in the context of a robotic game. The game takes place in a home-like environment and was designed to motivate and engage users in an interaction where using the system was meaningful. We then discuss results that show significant differences among the interfaces. In particular, we show that interfaces such as the smartphone interface allow non-expert users to intuitively provide much better training examples to the robot, almost as good as those of expert users who are trained for this task and aware of the underlying visual perception and machine learning issues.
We also show that artifact-mediated teaching is significantly more efficient for robot learning, and equally good in terms of usability and user experience, compared with teaching through gesture-based, human-like interaction.
Document type:
Journal articles
IEEE Transactions on Robotics, Institute of Electrical and Electronics Engineers (IEEE), 2013, 29 (2), pp.525-541. <10.1109/TRO.2012.2228134>


https://hal.inria.fr/hal-00758241
Contributor: Matthieu Lapeyre
Submitted on: Wednesday, November 28, 2012 - 1:41:45 PM
Last modification on: Friday, January 3, 2014 - 10:56:27 AM

File

tro-2010-v10.pdf

Citation

Pierre Rouanet, Pierre-Yves Oudeyer, Fabien Danieau, David Filliat. The Impact of Human-Robot Interfaces on the Learning of Visual Objects. IEEE Transactions on Robotics, Institute of Electrical and Electronics Engineers (IEEE), 2013, 29 (2), pp.525-541. <10.1109/TRO.2012.2228134>. <hal-00758241>

Metrics

Record views

435

Document downloads

258