Conference paper — Year: 2021

Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent

Abstract

Minimizing the loss function is of paramount importance in deep neural networks. On the other hand, many popular optimization algorithms have been shown to correspond to some evolution equation of gradient-flow type. Inspired by the numerical schemes used for general evolution equations, we introduce a second-order stochastic Runge-Kutta method and show that it yields a consistent procedure for minimizing the loss function. In addition, it can be coupled, in an adaptive framework, with Stochastic Gradient Descent (SGD) to automatically adjust the learning rate of the SGD, without the need for any additional information on the Hessian of the loss function. The adaptive SGD, called SGD-G2, is successfully tested on standard datasets.
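To illustrate the general idea, here is a minimal sketch of how a second-order stochastic Runge-Kutta (Heun) step can drive learning-rate adaptation for SGD without Hessian information, as the abstract describes. The exact SGD-G2 update rule is given in the paper, not here; the function name adaptive_sgd_step, the tolerance tol, and the step-size controller below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def adaptive_sgd_step(theta, grad, h, tol=1e-3):
    """One SGD step with a Heun-based learning-rate adjustment (illustrative sketch).

    theta : current parameters (np.ndarray)
    grad  : callable returning a stochastic gradient at given parameters
    h     : current learning rate
    tol   : target local error per step (hypothetical tuning constant)
    """
    g0 = grad(theta)
    theta_euler = theta - h * g0              # plain SGD step (explicit Euler)
    g1 = grad(theta_euler)
    theta_heun = theta - 0.5 * h * (g0 + g1)  # second-order Runge-Kutta (Heun) step

    # The gap between the first- and second-order steps estimates the local
    # error of plain SGD at this learning rate, with no Hessian required.
    err = np.linalg.norm(theta_euler - theta_heun)

    # Classical step-size controller: grow h when the error is small,
    # shrink it when the error is large (factor clamped to [0.5, 2]).
    factor = 2.0 if err == 0 else min(2.0, max(0.5, np.sqrt(tol / err)))
    return theta_heun, h * factor
```

A toy usage, minimizing f(x) = 0.5 * ||x||^2 with noisy gradients:

```python
rng = np.random.default_rng(0)
theta, h = np.ones(10), 0.1
for _ in range(100):
    theta, h = adaptive_sgd_step(theta, lambda x: x + 0.01 * rng.normal(size=x.shape), h)
```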
Main file: turinici_ayadi2020-rk-adaptive-sgd.pdf (490.25 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02483988, version 1 (18-02-2020)


Cite

Imen Ayadi, Gabriel Turinici. Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent. 25th International Conference on Pattern Recognition (ICPR 2020), Jan 2021, Milano, Italy. ⟨hal-02483988⟩