OPTIMAL NON-ASYMPTOTIC BOUND OF THE RUPPERT-POLYAK AVERAGING WITHOUT STRONG CONVEXITY

Abstract: This paper is devoted to the non-asymptotic control of the mean-squared error for the Ruppert-Polyak stochastic averaged gradient descent introduced in the seminal contributions of [Rup88] and [PJ92]. In our main results, we establish non-asymptotic tight bounds (optimal with respect to the Cramér-Rao lower bound) in a very general framework that includes the uniformly strongly convex case as well as the one where the function f to be minimized satisfies a weaker Kurdyka-Łojasiewicz-type condition [Loj63, Kur98]. In particular, this makes it possible to recover some pathological examples, such as online learning for logistic regression (see [Bac14]) and recursive quantile estimation (even a non-convex situation). Finally, our bound is optimal when the decreasing step (γ_n)_{n≥1} satisfies γ_n = γ n^{-β} with β = 3/4, leading to a second-order term in O(n^{-5/4}).
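The averaging scheme the abstract refers to runs a standard stochastic gradient recursion with step γ_n = γ n^{-β} and returns the running average of the iterates. As an illustration, here is a minimal sketch applied to recursive quantile estimation, one of the examples cited in the abstract; the function name, the starting point, and the constant γ = 1 are illustrative choices, not taken from the paper.

```python
import random

def averaged_quantile_sgd(samples, alpha, gamma=1.0, beta=0.75):
    """Ruppert-Polyak averaged SGD estimate of the alpha-quantile.

    Stochastic gradient step: theta <- theta - gamma_n * (1{X <= theta} - alpha),
    with decreasing step gamma_n = gamma * n^(-beta), beta = 3/4 as in the abstract.
    Returns the running average of the iterates (the averaged estimator).
    """
    theta = 0.0      # raw SGD iterate
    theta_bar = 0.0  # Ruppert-Polyak average of the iterates
    for n, x in enumerate(samples, start=1):
        gamma_n = gamma * n ** (-beta)
        grad = (1.0 if x <= theta else 0.0) - alpha
        theta -= gamma_n * grad
        theta_bar += (theta - theta_bar) / n  # online mean update
    return theta_bar

# Estimate the median (alpha = 0.5) of a standard Gaussian, whose true value is 0.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200_000)]
est = averaged_quantile_sgd(data, alpha=0.5)
```

The averaged iterate `theta_bar` is what achieves the optimal (Cramér-Rao) first-order behavior discussed in the paper, whereas the raw iterate `theta` alone fluctuates at the slower rate dictated by γ_n.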
Document type:
Preprint, working paper
2017

Cited literature: [18 references]

https://hal.archives-ouvertes.fr/hal-01623986
Contributor: Fabien Panloup
Submitted on: Wednesday, October 25, 2017 - 23:32:23
Last modified on: Wednesday, December 12, 2018 - 15:22:08
Archived on: Friday, January 26, 2018 - 15:56:46

File

GP_09_20.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01623986, version 1

Citation

Sébastien Gadat, Fabien Panloup. OPTIMAL NON-ASYMPTOTIC BOUND OF THE RUPPERT-POLYAK AVERAGING WITHOUT STRONG CONVEXITY. 2017. 〈hal-01623986〉
