Journal articles

Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance

Abstract: In this paper, a general stochastic optimization procedure is studied, unifying several variants of stochastic gradient descent, including the stochastic heavy ball method, the Stochastic Nesterov Accelerated Gradient algorithm (S-NAG), and the widely used Adam algorithm. The algorithm is seen as a noisy Euler discretization of a non-autonomous ordinary differential equation, recently introduced by Belotto da Silva and Gazeau, which is analyzed in depth. Assuming that the objective function is non-convex and differentiable, the stability and the almost sure convergence of the iterates to the set of critical points are established. A noteworthy special case is the convergence proof of S-NAG in a non-convex setting. Under some assumptions, the convergence rate is given in the form of a Central Limit Theorem. Finally, the non-convergence of the algorithm to undesired critical points, such as local maxima or saddle points, is established. Here, the main ingredient is a new avoidance-of-traps result for non-autonomous settings, which is of independent interest.
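To fix ideas, here is a minimal sketch of one special case named in the abstract, the stochastic heavy ball iteration, written as a noisy Euler-type discretization of a momentum dynamic. The function names, step size gamma, momentum parameter alpha, and noise model below are illustrative placeholders and do not reproduce the paper's notation or its general unified scheme.

```python
import numpy as np

def stochastic_heavy_ball(grad_oracle, x0, gamma=0.01, alpha=0.9, n_iters=1000, rng=None):
    """Sketch of a stochastic heavy ball update:
    v_{k+1} = alpha * v_k - gamma * g_k,  x_{k+1} = x_k + v_{k+1},
    where g_k = grad_oracle(x_k, rng) is a noisy gradient estimate."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad_oracle(x, rng)      # noisy gradient at the current iterate
        v = alpha * v - gamma * g    # momentum (velocity) update
        x = x + v                    # Euler-type step on the position
    return x

# Hypothetical usage: minimize f(x) = ||x||^2 / 2 with additive Gaussian gradient noise.
if __name__ == "__main__":
    oracle = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    print(stochastic_heavy_ball(oracle, x0=np.ones(3)))
```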

https://hal.archives-ouvertes.fr/hal-03310455
Contributor: Walid Hachem
Submitted on: Friday, July 30, 2021 - 2:32:47 PM
Last modification on: Friday, April 1, 2022 - 3:47:37 AM
Long-term archiving on: Sunday, October 31, 2021 - 6:18:09 PM

File

20adam.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03310455, version 1

Citation

Anas Barakat, Pascal Bianchi, Walid Hachem, Sholom Schechtman. Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance. Electronic Journal of Statistics, Institute of Mathematical Statistics (Shaker Heights, OH), 2021, 15 (2), pp. 3892-3947. ⟨hal-03310455⟩


Metrics

Record views: 26
File downloads: 24