Offline Learning for Planning: A Summary - HAL open archive
Conference paper, 2020

Offline Learning for Planning: A Summary

Abstract

The training of autonomous agents often requires expensive and unsafe trial-and-error interactions with the environment. Nowadays, several data sets containing recorded experiences of intelligent agents performing various tasks, from the control of unmanned vehicles to human-robot interaction and medical applications, are accessible on the internet. To limit the cost of the learning procedure, it is convenient to exploit the information that is already available rather than to collect new data. Nevertheless, the inability to augment the batch can lead autonomous agents to develop far-from-optimal behaviors when the sampled experiences do not allow for a good estimate of the true distribution of the environment. Offline learning is the area of machine learning concerned with efficiently obtaining an optimal policy from a batch of previously collected experiences, without further interaction with the environment. In this paper we outline the ideas motivating the development of state-of-the-art offline learning baselines. The methods surveyed introduce epistemic-uncertainty-dependent constraints into the classical resolution of a Markov Decision Process, with and without function approximators, aiming to alleviate the adverse effects of the distributional mismatch between the available samples and the real world. We comment on the practical utility of the theoretical bounds that justify the application of these algorithms, and we suggest the use of Generative Adversarial Networks to estimate the distributional shift that affects all of the proposed model-free and model-based approaches.
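The suggestion to estimate distributional shift with a GAN-style discriminator rests on the density-ratio trick: a classifier trained to distinguish batch samples from model-generated samples recovers, through its odds, the ratio between the two distributions. The following is a minimal sketch of that trick, not the paper's actual method; the 1-D Gaussian "batch" and "model" data and the plain logistic-regression discriminator are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D example: transitions from the batch vs. a shifted model.
batch = rng.normal(0.0, 1.0, size=(500, 1))   # samples from the data distribution
model = rng.normal(0.5, 1.0, size=(500, 1))   # samples from a shifted model distribution

# Train a logistic-regression "discriminator" d(x) = P(x comes from the batch).
X = np.vstack([batch, model])
y = np.concatenate([np.ones(len(batch)), np.zeros(len(model))])
X1 = np.hstack([X, np.ones((len(X), 1))])     # append a bias feature
w = np.zeros(X1.shape[1])
for _ in range(2000):                         # plain gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X1 @ w))
    w += 0.1 * X1.T @ (y - p) / len(y)

def density_ratio(x):
    """Estimate p_batch(x) / p_model(x) via the odds d(x) / (1 - d(x))."""
    d = 1.0 / (1.0 + np.exp(-(np.append(x, 1.0) @ w)))
    return d / (1.0 - d)
```

Points well covered by the batch yield a ratio above one, while points only the model visits yield a ratio below one; such an estimate could then weight or penalize out-of-distribution transitions during planning.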

Domains

Other
Main file: Angelotti_26790.pdf (270.38 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03125176, version 1 (29-01-2021)

Identifiers

  • HAL Id: hal-03125176, version 1
  • OATAO: 26790

Cite

Giorgio Angelotti, Nicolas Drougard, Caroline Ponzoni Carvalho Chanel. Offline Learning for Planning: A Summary. Bridging the Gap Between AI Planning and Reinforcement Learning (PRL), ICAPS 2020 Workshop, Oct 2020, Nancy, France. pp.153-161. ⟨hal-03125176⟩

Collections

ANR ANITI
