Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits

Nicolas Galichet 1, 2 Michèle Sebag 2 Olivier Teytaud 2
2 TAO - Machine Learning and Optimisation
CNRS - Centre National de la Recherche Scientifique : UMR8623, Inria Saclay - Ile de France, UP11 - Université Paris-Sud - Paris 11, LRI - Laboratoire de Recherche en Informatique
Abstract: Motivated by applications in energy management, this paper presents the Multi-Armed Risk-Aware Bandit (MARAB) algorithm. To limit the exploration of risky arms, MARAB takes as arm quality its conditional value at risk. As the user-supplied risk level goes to 0, the arm quality tends toward the essential infimum of the arm distribution, and MARAB tends toward the MIN multi-armed bandit algorithm, which selects the arm with the maximal minimal value. As a first contribution, this paper presents a theoretical analysis of the MIN algorithm under mild assumptions, establishing its robustness relative to UCB. The analysis is supported by extensive experimental validation of MIN and MARAB, compared to UCB and state-of-the-art risk-aware MAB algorithms, on artificial and real-world problems.
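To illustrate the arm-quality measure described in the abstract, the empirical conditional value at risk (CVaR) at level α is the mean of the ⌈αn⌉ lowest of n observed rewards; as α goes to 0 it reduces to the sample minimum, matching the stated limit toward the MIN algorithm. The sketch below is a minimal greedy illustration of this measure only — the names `empirical_cvar` and `pick_arm` are illustrative, and the full MARAB algorithm additionally manages exploration, which is not shown here.

```python
import math

def empirical_cvar(samples, alpha):
    """Empirical CVaR at risk level alpha: the mean of the
    ceil(alpha * n) lowest observations. As alpha -> 0 this
    tends to the sample minimum (the empirical essential infimum)."""
    k = max(1, math.ceil(alpha * len(samples)))
    worst = sorted(samples)[:k]
    return sum(worst) / k

def pick_arm(history, alpha):
    """Greedy choice of the arm with the largest empirical CVaR.
    history: dict mapping arm id -> list of observed rewards.
    (Illustrative only: no exploration term is included.)"""
    return max(history, key=lambda arm: empirical_cvar(history[arm], alpha))

history = {
    "safe":  [4, 5, 4, 5, 4],     # moderate rewards, low spread
    "risky": [0, 10, 0, 10, 10],  # higher mean, occasional very bad outcome
}
print(pick_arm(history, alpha=0.2))  # -> safe
```

A risk-neutral mean-based rule would prefer the "risky" arm here (mean 6 vs 4.4); the CVaR criterion prefers "safe" because its worst-case outcomes are better, which is the risk-averse behavior the paper targets.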
Document type: Conference papers
Contributor: Nicolas Galichet
Submitted on: Monday, January 6, 2014 - 3:27:30 PM
Last modification on: Thursday, November 5, 2020 - 9:02:03 AM
Long-term archiving on: Thursday, April 10, 2014 - 5:20:30 PM
Files produced by the author(s)
  • HAL Id: hal-00924062, version 2
  • arXiv: 1401.1123



Nicolas Galichet, Michèle Sebag, Olivier Teytaud. Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits. Asian Conference on Machine Learning 2013, Nov 2013, Canberra, Australia. pp.245-260. ⟨hal-00924062v2⟩