Relational Reinforcement Learning for Planning with Exogenous Effects

Abstract : Probabilistic planners have recently improved to the point that they can solve difficult tasks with complex and expressive models. In contrast, learners cannot yet tackle the expressive models that planners can, which forces complex models to be mostly handcrafted. We propose a new learning approach that can learn relational probabilistic models with both action effects and exogenous effects. The proposed approach combines a multi-valued variant of inductive logic programming, which generates candidate models, with an optimization method that selects the best set of planning operators to model a problem. We also show how to combine this learner with reinforcement learning algorithms to solve complete problems. Finally, experimental validation shows improvements over previous work in both simulation and a robotic task. The robotic task involves a dynamic scenario with several agents, in which a manipulator robot has to clear the tableware on a table. We show that the exogenous effects learned by our approach allowed the robot to clear the table more efficiently.
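The two-stage idea in the abstract, generating candidate relational operators and then selecting the subset that best models the observed transitions, can be illustrated with a toy sketch. This is not the paper's algorithm: the operator encoding, all predicate and operator names, and the coverage-minus-penalty score (a crude stand-in for the paper's optimization objective) are illustrative assumptions, and exhaustive subset search replaces the actual selection method. Note how an operator with `action=None` models an exogenous effect that can fire regardless of the agent's action.

```python
from dataclasses import dataclass
from itertools import chain, combinations
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class Operator:
    """A toy ground planning operator (all names are illustrative)."""
    name: str
    action: Optional[str]    # None marks an exogenous effect: it may fire
                             # regardless of the agent's chosen action
    precond: FrozenSet[str]  # literals that must hold beforehand
    effect: FrozenSet[str]   # literals made true afterwards

def explains(op, state, action, next_state):
    """True if `op` covers this (state, action, next_state) transition."""
    if op.action is not None and op.action != action:
        return False
    return op.precond <= state and op.effect <= next_state

def score(ops, transitions, penalty=0.5):
    """Transitions covered minus a complexity penalty: a crude stand-in
    for a penalized-likelihood objective over operator sets."""
    covered = sum(any(explains(op, *t) for op in ops) for t in transitions)
    return covered - penalty * len(ops)

def select_best(candidates, transitions):
    """Exhaustive search over candidate subsets (fine at toy scale)."""
    subsets = chain.from_iterable(
        combinations(candidates, r) for r in range(len(candidates) + 1))
    return max(subsets, key=lambda ops: score(ops, transitions))

# Hypothetical candidates proposed by a rule generator: a useful action
# operator, a useful exogenous operator, and a junk candidate.
pick = Operator("pick", "pick(cup)",
                frozenset({"on(cup,table)"}), frozenset({"holding(cup)"}))
exo = Operator("customer-sets-plate", None,
               frozenset({"customer_present"}), frozenset({"on(plate,table)"}))
junk = Operator("junk", "noop",
                frozenset({"never_true"}), frozenset({"nothing"}))

transitions = [
    (frozenset({"on(cup,table)"}), "pick(cup)", frozenset({"holding(cup)"})),
    (frozenset({"customer_present"}), "wait",
     frozenset({"customer_present", "on(plate,table)"})),
]

best = select_best([pick, exo, junk], transitions)
print(sorted(op.name for op in best))  # ['customer-sets-plate', 'pick']
```

The junk candidate is rejected because it covers no transition yet still pays the complexity penalty, while the exogenous operator is kept because it explains the plate appearing during a `wait` step.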

Cited literature : 51 references

https://hal.archives-ouvertes.fr/hal-01710491
Contributor : Tony Ribeiro
Submitted on : Friday, February 16, 2018 - 8:36:59 AM
Last modification on : Thursday, June 27, 2019 - 1:36:06 PM
Long-term archiving on : Sunday, May 6, 2018 - 8:19:22 PM

File

16-326.pdf
Publisher files allowed on an open archive

Licence


Distributed under a Creative Commons Attribution 4.0 International License

Identifiers

  • HAL Id : hal-01710491, version 1

Citation

David Martinez, Guillem Alenya, Tony Ribeiro, Katsumi Inoue, Carme Torras. Relational Reinforcement Learning for Planning with Exogenous Effects. Journal of Machine Learning Research, Microtome Publishing, 2017, 18, pp. 1-44. ⟨hal-01710491⟩
