Learning Legible Motion from Human–Robot Interactions

Abstract: In collaborative tasks, displaying legible behavior enables other members of the team to anticipate intentions and thus to coordinate their actions accordingly. Behavior is considered legible when an observer is able to quickly and correctly infer the intention of the agent generating it. In previous work, legible robot behavior has been generated with model-based methods that optimize task-specific models of legibility. In our work, we instead use model-free reinforcement learning with a generic, task-independent cost function. In experiments involving a joint task between thirty human subjects and a humanoid robot, we show that: 1) legible behavior arises when the efficiency of joint task completion is rewarded during human-robot interactions; 2) behavior that has been optimized for one subject is also more legible for other subjects; 3) the universal legibility of behavior is influenced by the choice of policy representation.

Fig. 1: Illustration of the button-pressing experiment, in which the robot reaches for and presses a button. The human subject predicts which button the robot will push, and is instructed to quickly press a button of the same color when sufficiently confident in this prediction. By rewarding the robot for fast and successful joint completion of the task (which indirectly rewards how quickly the human recognizes the robot's intention, and thus how quickly the human can start the complementary action), the robot learns to perform more legible motion. The three example trajectories illustrate the concept of legible behavior: it enables correct prediction of the intention early in the trajectory.
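To make the core idea concrete, the following is a minimal sketch (not the authors' implementation): a generic, task-independent reward based only on fast and successful joint task completion, optimized by a simple model-free, perturbation-based policy search. All names, parameters, and the simulated human-response model are illustrative assumptions.

```python
import random

def joint_task_reward(success: bool, duration: float, bonus: float = 10.0) -> float:
    """Generic, task-independent reward: fast, successful *joint* completion.

    `duration` includes the human's response time, so motions whose intent
    the human reads earlier finish the joint task sooner and score higher.
    Legibility is thus rewarded only indirectly, with no explicit model of it.
    """
    return (bonus if success else 0.0) - duration

def policy_search(evaluate, theta, n_iters=200, sigma=0.1, seed=0):
    """Minimal (1+1)-style model-free policy improvement: perturb the policy
    parameters, keep the perturbation whenever the reward improves."""
    rng = rng_state = random.Random(seed)
    best_r = evaluate(theta)
    for _ in range(n_iters):
        candidate = [t + rng.gauss(0.0, sigma) for t in theta]
        r = evaluate(candidate)
        if r > best_r:
            theta, best_r = candidate, r
    return theta, best_r

def simulated_trial(theta):
    """Toy stand-in for one human-in-the-loop trial: the 'human' responds
    faster the more the motion exaggerates toward the true target, while
    exaggeration makes the robot's own motion slightly slower."""
    exaggeration = theta[0]
    human_response_time = max(0.5, 3.0 - exaggeration)
    robot_motion_time = 2.0 + 0.1 * exaggeration ** 2
    return joint_task_reward(True, human_response_time + robot_motion_time)

theta, reward = policy_search(simulated_trial, [0.0])
```

In this toy setting the search settles on a moderately exaggerated motion: the human's time saved by early intent recognition outweighs the robot's small extra motion cost, which mirrors the paper's claim that rewarding joint efficiency alone suffices for legible behavior to emerge.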
Document type: Journal article

Cited literature: 23 references

https://hal.archives-ouvertes.fr/hal-01629451
Contributor: Baptiste Busch
Submitted on: Monday, November 6, 2017, 2:42:40 PM
Last modification on: Friday, December 15, 2017, 1:25:18 PM

File

main_final.pdf
Files produced by the author(s)

Identifiers

Citation

Baptiste Busch, Jonathan Grizou, Manuel Lopes, Freek Stulp. Learning Legible Motion from Human–Robot Interactions. International Journal of Social Robotics, Springer, 2017, 211 (3-4), pp. 517-530. ⟨10.1007/s12369-017-0400-4⟩. ⟨hal-01629451⟩
