OPTIMAL NON-ASYMPTOTIC BOUND OF THE RUPPERT-POLYAK AVERAGING WITHOUT STRONG CONVEXITY

Abstract: This paper is devoted to the non-asymptotic control of the mean-squared error for the Ruppert-Polyak stochastic averaged gradient descent introduced in the seminal contributions of [Rup88] and [PJ92]. In our main results, we establish non-asymptotic tight bounds (optimal with respect to the Cramér-Rao lower bound) in a very general framework that includes the uniformly strongly convex case as well as the one where the function f to be minimized satisfies a weaker Kurdyka-Łojasiewicz-type condition [Loj63, Kur98]. In particular, this makes it possible to recover some pathological examples such as on-line learning for logistic regression (see [Bac14]) and recursive quantile estimation (an even non-convex situation). Finally, our bound is optimal when the decreasing step (γ_n)_{n≥1} satisfies γ_n = γ n^(-β) with β = 3/4, leading to a second-order term in O(n^(-5/4)).
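The abstract describes Ruppert-Polyak averaging: run stochastic gradient descent with a polynomially decreasing step γ_n = γ n^(-β), β = 3/4, and return the running average of the iterates. The sketch below is an illustrative implementation of that generic scheme, not the paper's analysis; the recursive median example and all function names are assumptions chosen to echo the quantile-estimation case mentioned in the abstract.

```python
import numpy as np

def ruppert_polyak_sgd(grad, theta0, n_iter, gamma=1.0, beta=0.75, rng=None):
    """SGD with Ruppert-Polyak averaging.

    Uses steps gamma_n = gamma * n**(-beta); beta = 3/4 is the choice
    for which the paper reports a second-order term in O(n**(-5/4)).
    Returns the average of the iterates theta_1, ..., theta_n.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = float(theta0)
    avg = theta
    for n in range(1, n_iter + 1):
        step = gamma * n ** (-beta)
        theta = theta - step * grad(theta, rng)   # one stochastic gradient step
        avg += (theta - avg) / n                  # running average of the iterates
    return avg

# Toy example: recursive median estimation (a quantile problem, as in the
# abstract). E|theta - X| has stochastic subgradient sign(theta - x).
def median_grad(theta, rng):
    x = rng.normal(loc=2.0, scale=1.0)  # samples with true median 2.0
    return np.sign(theta - x)

est = ruppert_polyak_sgd(median_grad, theta0=0.0, n_iter=20000,
                         rng=np.random.default_rng(0))
```

With enough iterations the averaged iterate concentrates near the true median 2.0, while the raw iterate keeps fluctuating at the scale of the step size; this gap is exactly what the averaging is designed to remove.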
Document type: Preprints, Working Papers, ...

Cited literature: [18 references]

https://hal.archives-ouvertes.fr/hal-01623986
Contributor: Fabien Panloup
Submitted on: Wednesday, October 25, 2017 - 11:32:23 PM
Last modification on: Friday, April 12, 2019 - 4:22:51 PM
Document(s) archived on: Friday, January 26, 2018 - 3:56:46 PM

File: GP_09_20.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01623986, version 1

Citation

Sébastien Gadat, Fabien Panloup. OPTIMAL NON-ASYMPTOTIC BOUND OF THE RUPPERT-POLYAK AVERAGING WITHOUT STRONG CONVEXITY. 2017. ⟨hal-01623986⟩

Metrics

  • Record views: 204
  • File downloads: 33