Conference papers

Statistical efficiency of Thompson sampling for combinatorial semi-bandits

Abstract: We investigate the stochastic combinatorial multi-armed bandit problem with semi-bandit feedback (CMAB). In CMAB, the existence of a computationally efficient policy with asymptotically optimal regret (up to a factor poly-logarithmic in the action size) is still open for many families of distributions, including mutually independent outcomes and, more generally, the multivariate sub-Gaussian family. We answer this question for these two families by analyzing variants of the Combinatorial Thompson Sampling policy (CTS). For mutually independent outcomes in $[0,1]$, we give a tight analysis of CTS with Beta priors. We then turn to the more general setting of multivariate sub-Gaussian outcomes and give a tight analysis of CTS with Gaussian priors. This last result provides an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB), which, although asymptotically optimal, is not computationally efficient.
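
To make the CTS policy concrete, here is a minimal sketch of Combinatorial Thompson Sampling with Beta priors for the simplest case covered by the abstract: mutually independent Bernoulli outcomes and an m-set action space (pick the m arms with the highest sampled means). The oracle, problem instance, and horizon below are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Minimal sketch of Combinatorial Thompson Sampling (CTS) with Beta priors.
    # Hypothetical instance: 10 independent Bernoulli arms, actions = subsets of size m.
    rng = np.random.default_rng(0)
    n_arms, m, horizon = 10, 3, 5000
    true_means = rng.uniform(0.1, 0.9, size=n_arms)   # unknown means in [0, 1]

    alpha = np.ones(n_arms)   # Beta posterior "successes" per base arm
    beta = np.ones(n_arms)    # Beta posterior "failures" per base arm

    def oracle(theta):
        """Exact linear maximization oracle: here, simply the m largest entries."""
        return np.argsort(theta)[-m:]

    for t in range(horizon):
        theta = rng.beta(alpha, beta)                          # one posterior sample per base arm
        action = oracle(theta)                                 # combinatorial action from sampled means
        outcomes = rng.uniform(size=m) < true_means[action]    # semi-bandit feedback (Bernoulli here)
        # Beta-Bernoulli update on each observed base arm; general outcomes in (0, 1)
        # would typically be binarized by a Bernoulli trial before this update.
        alpha[action] += outcomes
        beta[action] += 1 - outcomes

    print("estimated means:", np.round(alpha / (alpha + beta), 2))
    print("best m arms (true):", np.sort(oracle(true_means)))

The Gaussian-prior variant discussed in the abstract would replace the Beta posteriors above with Gaussian ones for multivariate sub-Gaussian outcomes, leaving the oracle step unchanged.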

https://hal.archives-ouvertes.fr/hal-03089794
Contributor: Vianney Perchet
Submitted on: Monday, December 28, 2020 - 7:14:06 PM
Last modification on: Tuesday, January 4, 2022 - 6:12:16 AM


Identifiers

  • HAL Id: hal-03089794, version 1
  • arXiv: 2006.06613

Citation

Pierre Perrault, Etienne Boursier, Vianney Perchet, Michal Valko. Statistical efficiency of Thompson sampling for combinatorial semi-bandits. Neural Information Processing Systems, Dec 2020, Virtual, France. ⟨hal-03089794⟩

