Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning

Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko
SEQUEL (Sequential Learning), Inria Lille - Nord Europe; LIFL (Laboratoire d'Informatique Fondamentale de Lille); LAGIS (Laboratoire d'Automatique, Génie Informatique et Signal)
Abstract: We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mappings from histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order $O(T^{2/3})$, with an additive term that is constant in $T$ yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after $T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is optimal in $T$, since $O(\sqrt{T})$ is the optimal regret in the setting of learning in a (single discrete) MDP.
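For concreteness, here is a minimal sketch of the average regret criterion, assuming the standard average-reward definition used in this line of work (the symbols $\rho^*$ and $r_t$ are introduced here for illustration and are not taken from the record): writing $\rho^*$ for the optimal average reward achievable under the best Markov representation and $r_t$ for the reward actually collected at step $t$, the regret after $T$ steps is
$$\Delta(T) \;=\; T\,\rho^{*} \;-\; \sum_{t=1}^{T} r_t .$$
Under this reading, the paper's result states that $\Delta(T)$ grows as $O(\sqrt{T})$, improving on the earlier $O(T^{2/3})$ rate mentioned in the abstract.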
Document type:
Conference paper
ICML - 30th International Conference on Machine Learning, 2013, Atlanta, USA. 28(1), pp. 543-551, 2013, JMLR W&CP

Cited literature: 14 references

https://hal.inria.fr/hal-00778586
Contributor: Daniil Ryabko
Submitted on: Wednesday, March 20, 2013 - 10:52:14
Last modified on: Tuesday, January 19, 2016 - 01:06:50
Document(s) archived on: Friday, June 21, 2013 - 04:12:20

File

icml1_iblb_cr-corrected.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-00778586, version 1

Citation

Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko. Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning. ICML - 30th International Conference on Machine Learning, 2013, Atlanta, USA. 28(1), pp. 543-551, 2013, JMLR W&CP. 〈hal-00778586〉

Metrics

Record views: 289
Document downloads: 128