Corrupt Bandits for Preserving Local Privacy

Abstract : We study a variant of the stochastic multi-armed bandit (MAB) problem in which the rewards are corrupted. In this framework, motivated by privacy preservation in online recommender systems, the goal is to maximize the sum of the (unobserved) rewards, based on the observation of a transformation of these rewards through a stochastic corruption process with known parameters. We provide a lower bound on the expected regret of any bandit algorithm in this corrupted setting. We devise a frequentist algorithm, KLUCB-CF, and a Bayesian algorithm, TS-CF, and give upper bounds on their regret. We also provide the appropriate corruption parameters to guarantee a desired level of local privacy and analyze how this impacts the regret. Finally, we present some experimental results that confirm our analysis.
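To make the setting concrete, here is a minimal sketch of one standard stochastic corruption process for Bernoulli rewards, randomized response, together with the de-biasing step a learner can apply because the corruption parameters are known. The function names and the parameter values `p` and `q` are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import random

def corrupt(reward, p=0.9, q=0.1):
    """Randomized-response corruption of a binary reward.

    Outputs 1 with probability p when reward == 1, and with
    probability q when reward == 0. p and q are known corruption
    parameters (the values here are illustrative assumptions).
    """
    keep_prob = p if reward == 1 else q
    return 1 if random.random() < keep_prob else 0

def debias(corrupted_mean, p=0.9, q=0.1):
    """Recover an estimate of the true mean from corrupted feedback.

    Since E[corrupted reward] = q + (p - q) * mu, the known
    parameters let us invert the corruption.
    """
    return (corrupted_mean - q) / (p - q)
```

For example, an arm with true mean 0.5 produces corrupted observations with mean q + (p - q) * 0.5 = 0.5 under these illustrative parameters, and `debias(0.5)` recovers 0.5. Choosing p and q closer together hides individual rewards more strongly (better local privacy) at the cost of a noisier estimate, which is the regret trade-off the abstract refers to.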
Document type :
Conference papers
Contributor : Emilie Kaufmann
Submitted on : Tuesday, April 3, 2018 - 3:57:47 PM
Last modification on : Monday, December 14, 2020 - 5:26:33 PM