Pure Exploration in Infinitely-Armed Bandit Models with Fixed-Confidence

Abstract: We consider the problem of near-optimal arm identification in the fixed-confidence setting of the infinitely-armed bandit problem, when nothing is known about the arm reservoir distribution. We (1) introduce a PAC-like framework within which to derive and cast results; (2) derive a sample complexity lower bound for near-optimal arm identification; (3) propose an algorithm that identifies a nearly-optimal arm with high probability and derive an upper bound on its sample complexity which is within a log factor of our lower bound; and (4) discuss whether our log^2(1/delta) dependence is inescapable for "two-phase" (select arms first, identify the best later) algorithms in the infinite setting. This work permits the application of bandit models to a broader class of problems where fewer assumptions hold.
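To make the "two-phase" structure mentioned in the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm): phase 1 draws a finite set of candidate arms from the reservoir, phase 2 pulls each candidate uniformly and returns the empirical best, with sample sizes chosen via a Hoeffding union bound. The names reservoir_sample, pull_arm, and the parameter alpha (an assumed lower bound on the reservoir's proportion of near-optimal arms) are illustrative assumptions; the paper itself assumes nothing is known about the reservoir.

```python
import math
import random


def two_phase_near_optimal_arm(reservoir_sample, pull_arm, epsilon=0.1, delta=0.05, alpha=0.1):
    """Illustrative two-phase scheme (hypothetical, not the paper's method).

    reservoir_sample() -> an arm handle drawn from the unknown reservoir
    pull_arm(arm)      -> one stochastic reward in [0, 1] for that arm
    alpha              -> assumed fraction of epsilon-good arms in the reservoir
    """
    # Phase 1: draw m arms so that, with prob. >= 1 - delta/2, at least one
    # of them lies in the top-alpha fraction of the reservoir:
    # (1 - alpha)^m <= exp(-m * alpha) <= delta / 2.
    m = math.ceil(math.log(2.0 / delta) / alpha)
    arms = [reservoir_sample() for _ in range(m)]

    # Phase 2: pull each candidate n times so that, with prob. >= 1 - delta/2,
    # every empirical mean is within epsilon/2 of its true mean (Hoeffding +
    # union bound over the m candidates).
    n = math.ceil((2.0 / epsilon ** 2) * math.log(4.0 * m / delta))
    means = [sum(pull_arm(arm) for _ in range(n)) / n for arm in arms]

    # Return the empirically best candidate.
    best_index = max(range(m), key=lambda i: means[i])
    return arms[best_index]


if __name__ == "__main__":
    # Toy reservoir: arm means drawn uniformly in [0, 1], Bernoulli rewards.
    random.seed(0)
    draw_arm = lambda: random.random()                 # the arm's mean serves as its handle
    pull = lambda mu: 1.0 if random.random() < mu else 0.0
    print(two_phase_near_optimal_arm(draw_arm, pull, epsilon=0.2, delta=0.1))
```

Note that this naive scheme pays a confidence price in both phases, which is one way to see where a log^2(1/delta) dependence can arise for two-phase strategies; the paper's analysis addresses whether that dependence can be avoided.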
Document type: Conference papers


https://hal.archives-ouvertes.fr/hal-01729969
Contributor: Emilie Kaufmann
Submitted on: Monday, March 12, 2018 - 6:59:43 PM
Last modification on: Friday, March 22, 2019 - 1:36:27 AM
Long-term archiving on: Wednesday, June 13, 2018 - 3:00:31 PM

Identifiers

  • HAL Id: hal-01729969, version 1
  • arXiv: 1803.04665

Citation

Maryam Aziz, Jesse Anderton, Emilie Kaufmann, Javed Aslam. Pure Exploration in Infinitely-Armed Bandit Models with Fixed-Confidence. ALT 2018 - Algorithmic Learning Theory, Apr 2018, Lanzarote, Spain. ⟨hal-01729969⟩
