Thompson sampling for one-dimensional exponential family bandits

Nathaniel Korda¹, Emilie Kaufmann², Rémi Munos¹
¹ SEQUEL (Sequential Learning): LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Thompson Sampling has been demonstrated to be effective in many complex bandit models; however, the theoretical guarantees available for the parametric multi-armed bandit are still limited to the Bernoulli case. Here we extend them by proving asymptotic optimality of the algorithm using the Jeffreys prior for one-dimensional exponential family bandits. Our proof builds on previous work, but also makes extensive use of the closed forms for the Kullback-Leibler divergence and the Fisher information (and thus the Jeffreys prior) available in an exponential family. This allows us to give a finite-time exponential concentration inequality for posterior distributions on exponential families that may be of interest in its own right. Moreover, our analysis covers some distributions for which no optimistic algorithm has yet been proposed, including heavy-tailed exponential families.
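To make the algorithm concrete, here is a minimal sketch of Thompson Sampling with the Jeffreys prior in the Bernoulli case, the member of the one-dimensional exponential family for which guarantees were previously known. For a Bernoulli likelihood the Jeffreys prior is Beta(1/2, 1/2), so the posterior after s successes and f failures is Beta(s + 1/2, f + 1/2). This is not code from the paper; the function name, arm means, and horizon below are illustrative assumptions.

import numpy as np

def thompson_sampling_bernoulli_jeffreys(arm_means, horizon, rng=None):
    """Thompson Sampling on Bernoulli arms with the Jeffreys prior Beta(1/2, 1/2)."""
    rng = np.random.default_rng() if rng is None else rng
    n_arms = len(arm_means)
    successes = np.zeros(n_arms)
    failures = np.zeros(n_arms)
    rewards = np.zeros(horizon)

    for t in range(horizon):
        # Draw one mean per arm from its posterior Beta(s + 1/2, f + 1/2)
        # (Jeffreys prior for the Bernoulli likelihood) and play the argmax.
        samples = rng.beta(successes + 0.5, failures + 0.5)
        arm = int(np.argmax(samples))
        reward = rng.binomial(1, arm_means[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward
        rewards[t] = reward

    return rewards

# Illustrative run: empirical regret on a three-armed problem.
means = [0.3, 0.5, 0.7]
rewards = thompson_sampling_bernoulli_jeffreys(means, horizon=10_000)
regret = max(means) * len(rewards) - rewards.sum()
print(f"empirical regret after {len(rewards)} rounds: {regret:.1f}")

For other one-dimensional exponential families the same loop applies, with the Beta posterior replaced by the posterior induced by that family's Jeffreys prior, which the paper derives in closed form from the Fisher information.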
Document type: Conference paper

Cited literature: 16 references

https://hal.archives-ouvertes.fr/hal-00923683
Contributor: Rémi Munos
Submitted on: Friday, January 3, 2014 - 7:11:54 PM

File

nips13-TS.pdf (files produced by the author(s))

Identifiers

  • HAL Id : hal-00923683, version 1

Citation

Nathaniel Korda, Emilie Kaufmann, Rémi Munos. Thompson sampling for one-dimensional exponential family bandits. Advances in Neural Information Processing Systems, 2013, United States. ⟨hal-00923683⟩
