Robustness of Anytime Bandit Policies

Antoine Salomon 1,2, Jean-Yves Audibert 1,2
1 IMAGINE [Marne-la-Vallée]
LIGM - Laboratoire d'Informatique Gaspard-Monge, CSTB - Centre Scientifique et Technique du Bâtiment, ENPC - École des Ponts ParisTech
Abstract: This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. (2009) exhibit a policy such that, with probability at least 1-1/n, the regret of the policy is of order log(n). They also show that this property is not shared by the popular UCB1 policy of Auer et al. (2002). This work first answers an open question: it extends this negative result to any anytime policy, i.e., any policy that does not need to know n in advance. The second contribution of the paper is the design of anytime robust policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the arms.
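
As a point of reference only, the sketch below is a minimal Python rendering of the anytime UCB1 index policy of Auer et al. (2002) that the abstract contrasts with horizon-aware policies; the function name, the representation of arms as callables, and the toy Bernoulli arms are illustrative assumptions, not material from the paper.

```python
import math
import random

def ucb1(arms, n_rounds):
    """Run the UCB1 index policy of Auer et al. (2002) for n_rounds plays.

    `arms` is a list of zero-argument callables, each returning a random
    reward in [0, 1]. UCB1 is "anytime": its index only uses the current
    time step t, never the total horizon n_rounds.
    """
    k = len(arms)
    counts = [0] * k      # number of pulls of each arm
    means = [0.0] * k     # empirical mean reward of each arm

    for t in range(1, n_rounds + 1):
        if t <= k:
            i = t - 1     # initialisation: pull every arm once
        else:
            # index = empirical mean + exploration bonus sqrt(2 ln t / pulls)
            i = max(range(k),
                    key=lambda j: means[j] + math.sqrt(2.0 * math.log(t) / counts[j]))
        reward = arms[i]()
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]   # incremental mean update
    return counts, means

# Toy run on two Bernoulli arms with means 0.5 and 0.6 (hypothetical example).
arms = [lambda: float(random.random() < 0.5),
        lambda: float(random.random() < 0.6)]
print(ucb1(arms, n_rounds=10000))
```

The paper's negative result concerns exactly this kind of policy: no anytime policy can guarantee, with probability at least 1-1/n, a regret of order log(n) simultaneously for all n, in contrast to horizon-aware policies.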
Document type: Preprints, Working Papers, ...

Cited literature: 22 references

https://hal.archives-ouvertes.fr/hal-00579607
Contributor: Antoine Salomon
Submitted on: Monday, July 25, 2011 - 2:04:45 PM
Last modification on: Thursday, July 5, 2018 - 2:26:41 PM
Long-term archiving on: Sunday, December 4, 2016 - 9:07:24 AM

Files

  • anytime.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-00579607, version 3
  • arXiv: 1107.4506

Citation

Antoine Salomon, Jean-Yves Audibert. Robustness of Anytime Bandit Policies. 2011. ⟨hal-00579607v3⟩
