Speed learning on the fly

Pierre-Yves Massé 1,2, Yann Ollivier 1,2
2 TAO - Machine Learning and Optimisation, CNRS - Centre National de la Recherche Scientifique (UMR 8623), Inria Saclay - Île-de-France, UP11 - Université Paris-Sud - Paris 11, LRI - Laboratoire de Recherche en Informatique
Abstract: The practical performance of online stochastic gradient descent algorithms is highly dependent on the chosen step size, which must be tediously hand-tuned in many applications. The same is true for more advanced variants of stochastic gradients, such as SAGA, SVRG, or AdaGrad. Here we propose to adapt the step size by performing a gradient descent on the step size itself, viewing the whole performance of the learning trajectory as a function of step size. Importantly, this adaptation can be computed online at little cost, without having to iterate backward passes over the full data.
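To make the idea concrete, here is a minimal sketch of this kind of step-size adaptation on a toy quadratic problem. It maintains a running estimate h of the sensitivity of the parameter to the (log) step size and descends the instantaneous loss along that direction. The toy loss, the meta step size mu, the sign-normalized multiplicative update of eta, and the dropped Hessian-vector term are all illustrative simplifications, not the paper's exact algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10
    A = rng.standard_normal((dim, dim))
    A = A @ A.T / dim + np.eye(dim)   # random positive-definite quadratic

    def grad(theta, noise=0.1):
        """Stochastic gradient of the toy loss 0.5 * theta^T A theta."""
        return A @ theta + noise * rng.standard_normal(dim)

    theta = rng.standard_normal(dim)
    eta = 1e-4          # deliberately poor initial step size
    mu = 1e-2           # meta step size for log(eta) (illustrative value)
    h = np.zeros(dim)   # running estimate of d theta / d log(eta)

    for t in range(2000):
        g = grad(theta)
        # Hypergradient of the instantaneous loss w.r.t. log(eta):
        # d L(theta_t) / d log(eta) = grad(theta_t) . h_t
        hyper_g = g @ h
        # Multiplicative update keeps eta > 0; the sign normalization
        # crudely bounds how fast eta can change per step.
        eta *= np.exp(-mu * np.sign(hyper_g))

        # SGD step, then propagate the sensitivity through it:
        # theta_{t+1} = theta_t - eta * g_t implies
        # h_{t+1} = (I - eta * H_t) h_t - eta * g_t; here we drop the
        # Hessian-vector term H_t h_t, a common online simplification.
        theta = theta - eta * g
        h = h - eta * g

    print("final step size:", eta)
    print("final loss:", 0.5 * theta @ A @ theta)

The abstract's point is the last comment: the derivative of the trajectory with respect to the step size can be carried forward with the iteration itself, so no backward pass over the data is needed.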
Document type: Preprint, working paper (2015)

https://hal.archives-ouvertes.fr/hal-01228955
Contributor: Yann Ollivier
Submitted on: Sunday, November 15, 2015 - 18:19:03
Last modified on: Thursday, April 5, 2018 - 12:30:12

Identifiers

  • HAL Id: hal-01228955, version 1
  • arXiv: 1511.02540

Citation

Pierre-Yves Massé, Yann Ollivier. Speed learning on the fly. preprint. 2015. 〈hal-01228955〉
