Efficient online algorithms for fast-rate regret bounds under sparsity

Pierre Gaillard ¹, Olivier Wintenberger ²
¹ SIERRA - Statistical Machine Learning and Parsimony, DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: We consider the online convex optimization problem. In the setting of arbitrary sequences and a finite set of parameters, we establish a new fast-rate quantile regret bound. We then investigate optimization over the ℓ1-ball by discretizing the parameter space. Our algorithm is projection-free, and we obtain an efficient solution by restarting the algorithm on adaptive discretization grids. In the adversarial setting, we develop an algorithm that achieves several rates of convergence with different dependencies on the sparsity of the objective. In the i.i.d. setting, we establish new risk bounds that are adaptive to the sparsity of the problem and to the regularity of the risk (ranging from a rate 1/√T for general convex risks to 1/T for strongly convex risks). These results generalize previous work on sparse online learning. They are obtained under a weak assumption on the risk (Łojasiewicz's assumption) that allows multiple optima, which is crucial when dealing with degenerate situations.
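To make the finite-parameter setting concrete, here is a minimal sketch of a standard exponentially weighted average forecaster run over a discretization grid of the ℓ1-ball. This is only an illustration of the general idea of competing with a finite grid of parameters; it is not the paper's algorithm (which is projection-free and restarts on adaptive grids), and the grid, loss model, and learning rate below are our own hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite grid: lattice points of the l1-ball of R^2 with step 0.25.
step = 0.25
pts = [np.array([x, y])
       for x in np.arange(-1, 1 + step, step)
       for y in np.arange(-1, 1 + step, step)
       if abs(x) + abs(y) <= 1.0]
K = len(pts)

T = 500
L_max = (1.0 + 1.1) ** 2                  # crude upper bound on a single loss
eta = np.sqrt(8 * np.log(K) / T) / L_max  # standard Hedge learning rate
log_w = np.zeros(K)                       # log-weights for numerical stability

cum_loss_alg = 0.0
cum_loss_pts = np.zeros(K)
theta_star = np.array([0.5, -0.5])        # hidden sparse-ish target for toy losses

for t in range(T):
    x_t = rng.uniform(-1.0, 1.0, size=2)
    # Convex loss: squared prediction error against a bounded noisy target.
    y_t = x_t @ theta_star + 0.1 * rng.uniform(-1.0, 1.0)
    losses = np.array([(x_t @ p - y_t) ** 2 for p in pts])

    w = np.exp(log_w - log_w.max())
    w /= w.sum()                          # current weights over grid points
    cum_loss_alg += w @ losses            # forecaster's expected loss this round
    cum_loss_pts += losses
    log_w -= eta * losses                 # multiplicative-weights update

regret = cum_loss_alg - cum_loss_pts.min()
print(f"regret vs best grid point after T={T} rounds: {regret:.3f}")
```

The classical Hedge analysis guarantees regret of order √(T log K) against the best grid point; the paper's contribution is precisely to go beyond this slow rate (and beyond fixed grids) under sparsity and Łojasiewicz-type conditions.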
Document type:
Preprint, working paper
2018

Cited literature: 37 references

https://hal.archives-ouvertes.fr/hal-01798201
Contributor: Pierre Gaillard
Submitted on: Wednesday, May 23, 2018 - 11:48:20
Last modified on: Thursday, May 24, 2018 - 01:17:01

Files

SABOA.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01798201, version 1
  • ARXIV : 1805.09174

Citation

Pierre Gaillard, Olivier Wintenberger. Efficient online algorithms for fast-rate regret bounds under sparsity. 2018. 〈hal-01798201〉
