Stochastic bandits with vector losses: Minimizing $\ell^\infty$-norm of relative losses - Archive ouverte HAL
Preprint, Working Paper. Year: 2020

Stochastic bandits with vector losses: Minimizing $\ell^\infty$-norm of relative losses

Abstract

Multi-armed bandits are widely applied in scenarios such as recommender systems, where the goal is to maximize the click rate. However, other factors often need to be considered as well, such as user stickiness, user growth rate, and user experience assessment. In this paper, we model this situation as a K-armed bandit problem with multiple losses. We define the relative loss vector of an arm, whose i-th entry compares the arm with the arm that is optimal with respect to the i-th loss. We study two goals: (a) finding the arm with the minimum $\ell^\infty$-norm of relative losses at a given confidence level (which corresponds to fixed-confidence best-arm identification); (b) minimizing the $\ell^\infty$-norm of cumulative relative losses (which corresponds to regret minimization). For goal (a), we derive a problem-dependent sample complexity lower bound and discuss how to design matching algorithms. For goal (b), we establish a regret lower bound of $\Omega(T^{2/3})$ and provide a matching algorithm.
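To make the objective of goal (a) concrete, here is a minimal numerical sketch of one plausible reading of the abstract's definitions: with known mean losses, the relative loss of arm k under loss i is its mean minus the smallest mean for loss i, and the target arm minimizes the $\ell^\infty$-norm of that vector. The matrix values and variable names (mean_losses, etc.) are hypothetical and only illustrate the computation, not the authors' algorithm.

```python
import numpy as np

# Hypothetical example: K = 3 arms, d = 3 loss criteria.
# mean_losses[k, i] stands for the mean of the i-th loss of arm k
# (in the bandit setting these means are unknown and must be estimated).
mean_losses = np.array([
    [0.2, 0.5, 0.3],   # arm 0
    [0.4, 0.1, 0.6],   # arm 1
    [0.3, 0.3, 0.2],   # arm 2
])

# Per-loss optimum: the smallest mean for each loss criterion.
per_loss_optimum = mean_losses.min(axis=0)          # shape (d,)

# Relative loss vector of each arm: entry i compares the arm with the
# arm that is optimal with respect to the i-th loss.
relative_losses = mean_losses - per_loss_optimum    # shape (K, d)

# Goal (a): the arm minimizing the l-infinity norm of its relative losses.
linf_norms = relative_losses.max(axis=1)
best_arm = int(np.argmin(linf_norms))

print("relative loss vectors:\n", relative_losses)
print("l-infinity norms:", linf_norms, "-> best arm:", best_arm)
```

Under this reading, the $\ell^\infty$ criterion favors an arm that is simultaneously close to the best arm under every loss, rather than one that excels on a single criterion.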
Main file

shang2020vector.pdf (451.55 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02968536, version 1 (15-10-2020)

Identifiers

  • HAL Id: hal-02968536, version 1

Cite

Xuedong Shang, Han Shao, Jian Qian. Stochastic bandits with vector losses: Minimizing $\ell^\infty$-norm of relative losses. 2020. ⟨hal-02968536⟩
77 Views
86 Downloads
