Journal article in Electronic Journal of Statistics, 2021

Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance

Abstract

In this paper, a general stochastic optimization procedure is studied, unifying several variants of stochastic gradient descent such as, among others, the stochastic heavy ball method, the Stochastic Nesterov Accelerated Gradient algorithm (S-NAG), and the widely used Adam algorithm. The algorithm is seen as a noisy Euler discretization of a non-autonomous ordinary differential equation, recently introduced by Belotto da Silva and Gazeau, which is analyzed in depth. Assuming that the objective function is non-convex and differentiable, the stability and the almost sure convergence of the iterates to the set of critical points are established. A noteworthy special case is the convergence proof of S-NAG in a non-convex setting. Under some assumptions, the convergence rate is provided in the form of a central limit theorem. Finally, the non-convergence of the algorithm to undesired critical points, such as local maxima or saddle points, is established. Here, the main ingredient is a new avoidance-of-traps result for non-autonomous settings, which is of independent interest.
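To make the idea concrete, here is a minimal sketch of one member of this family, the standard stochastic heavy ball recursion, written as a noisy Euler discretization of the heavy ball ODE with friction. The step size \gamma_{n+1}, damping coefficient a, and noise term \xi_{n+1} below are generic placeholders rather than the article's notation, and the article's general scheme (which also covers S-NAG and Adam) is broader than this special case.

% A minimal sketch (requires amsmath); generic notation, not the article's scheme.
\begin{align*}
  &\text{Stochastic heavy ball iterates (noisy Euler step of size } \gamma_{n+1}\text{):}\\
  &\qquad m_{n+1} = m_n + \gamma_{n+1}\bigl(-a\,m_n - \nabla f(x_n) + \xi_{n+1}\bigr),
    \qquad x_{n+1} = x_n + \gamma_{n+1}\,m_{n+1},\\
  &\text{Underlying ODE (heavy ball with friction):}\\
  &\qquad \dot m(t) = -a\,m(t) - \nabla f\bigl(x(t)\bigr),\quad \dot x(t) = m(t)
    \;\Longleftrightarrow\; \ddot x(t) + a\,\dot x(t) + \nabla f\bigl(x(t)\bigr) = 0.
\end{align*}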
Main file: 20adam.pdf (649.79 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03310455, version 1 (30-07-2021)

Identifiers

hal-03310455
DOI: 10.1214/21-EJS1880

Cite

Anas Barakat, Pascal Bianchi, Walid Hachem, Sholom Schechtman. Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance. Electronic Journal of Statistics, 2021, 15 (2), pp.3892-3947. ⟨10.1214/21-EJS1880⟩. ⟨hal-03310455⟩

