Corrupt Bandits for Preserving Local Privacy

Abstract: We study a variant of the stochastic multi-armed bandit (MAB) problem in which the rewards are corrupted. In this framework, motivated by privacy preservation in online recommender systems, the goal is to maximize the sum of the (unobserved) rewards, based on the observation of a transformation of these rewards through a stochastic corruption process with known parameters. We provide a lower bound on the expected regret of any bandit algorithm in this corrupted setting. We devise a frequentist algorithm, KLUCB-CF, and a Bayesian algorithm, TS-CF, and give upper bounds on their regret. We also provide the appropriate corruption parameters to guarantee a desired level of local privacy and analyze how this impacts the regret. Finally, we present some experimental results that confirm our analysis.
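For binary rewards, the standard stochastic corruption process that guarantees local differential privacy is randomized response: the true reward is kept with a known probability and flipped otherwise. The sketch below illustrates this mechanism and the corresponding de-biasing of the observed mean; the function names and the specific privacy parameterization are illustrative assumptions, not the paper's exact construction.

```python
import math
import random

def corrupt(reward: int, epsilon: float) -> int:
    """Randomized-response corruption of a binary reward (0 or 1).

    The true reward is kept with probability p = e^eps / (1 + e^eps)
    and flipped otherwise; this choice of p is the standard one
    guaranteeing eps-local differential privacy for the feedback.
    """
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return reward if random.random() < p else 1 - reward

def debias(corrupted_mean: float, epsilon: float) -> float:
    """Unbiased estimate of the true mean from corrupted feedback.

    If the true mean is mu, the corrupted mean is
    m = (1 - p) + mu * (2p - 1), so inverting this affine map
    recovers mu (valid whenever p != 1/2, i.e. epsilon > 0).
    """
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return (corrupted_mean - (1.0 - p)) / (2.0 * p - 1.0)
```

A bandit algorithm in this setting only ever sees the output of `corrupt`, and algorithms such as KLUCB-CF build confidence bounds on such de-biased estimates; smaller epsilon (stronger privacy) shrinks `2p - 1` and thus inflates the estimation error, which is how privacy degrades the regret.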
Document type: Conference paper

https://hal.archives-ouvertes.fr/hal-01757297
Contributor: Emilie Kaufmann
Submitted on: Tuesday, April 3, 2018 - 3:57:47 PM
Last modification on: Friday, March 22, 2019 - 1:36:33 AM