Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection - HAL Open Archive
Conference Paper, Year: 2021

Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection

Andrea Tirinzoni
  • Role: Author
  • PersonId: 1115339
Aldo Pacchiano
  • Role: Author
  • PersonId: 1118669
Alessandro Lazaric
  • Role: Author
  • PersonId: 1105515
Matteo Pirotta
  • Role: Author
  • PersonId: 1105514

Abstract

We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure. We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with a linear reward function. This result encompasses the well-known settings of low-rank MDPs and, more generally, zero inherent Bellman error (also known as the Bellman closure assumption). We then demonstrate that this condition is also sufficient for these classes of problems by deriving a constant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR). Finally, we propose an algorithm for representation selection and prove that it achieves constant regret when one of the given representations, or a suitable combination of them, satisfies the UNISOFT condition.
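The linear structure the abstract refers to can be illustrated concretely: in a linear MDP, Q-values are linear in a feature map, Q(s, a) = φ(s, a)ᵀθ, and LSVI-UCB-style algorithms estimate θ by regularized least squares with an optimistic exploration bonus. The sketch below is illustrative only, with a synthetic feature matrix and hypothetical parameters, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of the linear-representation setting: Q(s, a) = phi(s, a)^T theta.
# The feature map, data, and dimensions below are synthetic illustrations.
rng = np.random.default_rng(0)
d = 4     # feature dimension
n = 100   # number of observed transitions

Phi = rng.normal(size=(n, d))                           # phi(s_i, a_i) per sample
theta_true = rng.normal(size=d)                         # hypothetical true parameter
targets = Phi @ theta_true + 0.1 * rng.normal(size=n)   # noisy regression targets

# Regularized least-squares estimate, the core update in LSVI-UCB-style methods:
# theta_hat = (Phi^T Phi + lam * I)^{-1} Phi^T y
lam = 1.0
A = Phi.T @ Phi + lam * np.eye(d)
theta_hat = np.linalg.solve(A, Phi.T @ targets)

# Optimism: add a bonus proportional to the elliptical norm ||phi||_{A^{-1}},
# which shrinks as the direction phi becomes well explored.
phi = Phi[0]
bonus = np.sqrt(phi @ np.linalg.solve(A, phi))
q_optimistic = phi @ theta_hat + bonus
```

With enough samples the estimate concentrates around the true parameter and the bonus vanishes, which is the mechanism behind the constant-regret results discussed in the abstract.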
Main file
unisoft.pdf (618.25 KB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-03479324 , version 1 (14-12-2021)

Identifiers

  • HAL Id : hal-03479324 , version 1

Cite

Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, et al.. Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. Thirty-Fifth Conference on Neural Information Processing Systems, Dec 2021, Virtual, France. ⟨hal-03479324⟩
23 Views
55 Downloads
