Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes

Loucas Pillaud-Vivien 1, 2 Alessandro Rudi 1, 2 Francis Bach 1, 2
1 SIERRA - Statistical Machine Learning and Parsimony
2 DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: We consider stochastic gradient descent (SGD) for least-squares regression with potentially several passes over the data. While several passes have been widely reported to perform better in practice in terms of predictive performance on unseen data, the existing theoretical analysis of SGD suggests that a single pass is statistically optimal. While this is true for low-dimensional easy problems, we show that for hard problems, multiple passes lead to statistically optimal predictions while a single pass does not; we also show that in these hard models, the optimal number of passes over the data increases with the sample size. In order to define the notion of hardness and show that our predictive performance is optimal, we consider potentially infinite-dimensional models and notions typically associated with kernel methods, namely, the decay of eigenvalues of the covariance matrix of the features and the complexity of the optimal predictor as measured through the covariance matrix. We illustrate our results on synthetic experiments with non-linear kernel methods and on a classical benchmark with a linear model.
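
Below is a minimal NumPy sketch of the multi-pass SGD procedure the abstract refers to, for plain linear least-squares. The synthetic data, the constant step size, the use of iterate averaging, and the chosen numbers of passes are illustrative assumptions, not the paper's exact algorithmic or experimental setup.

```python
import numpy as np

# Illustrative sketch: SGD on the squared loss with several passes over the
# data, comparing test error as the number of passes grows. All constants
# below (dimension, step size, noise level, pass counts) are assumptions.

rng = np.random.default_rng(0)
n, d = 1000, 50

# Synthetic linear model y = <theta*, x> + noise.
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ theta_star + 0.1 * rng.standard_normal(n)

# Noiseless test targets, so the test MSE directly reflects the excess risk.
X_test = rng.standard_normal((2000, d))
y_test = X_test @ theta_star


def multipass_sgd(X, y, n_passes, step=0.01):
    """SGD on 1/2 (y_i - <theta, x_i>)^2 with `n_passes` passes over the
    data, returning the running average of the iterates."""
    n, d = X.shape
    theta = np.zeros(d)
    theta_avg = np.zeros(d)
    t = 0
    for _ in range(n_passes):
        for i in rng.permutation(n):              # one pass = one shuffle of the data
            grad = (X[i] @ theta - y[i]) * X[i]   # stochastic gradient at sample i
            theta -= step * grad
            t += 1
            theta_avg += (theta - theta_avg) / t  # Polyak-Ruppert averaging
    return theta_avg


for n_passes in (1, 5, 20):
    theta_hat = multipass_sgd(X, y, n_passes)
    err = np.mean((X_test @ theta_hat - y_test) ** 2)
    print(f"passes = {n_passes:2d}   test MSE = {err:.5f}")
```

On a well-conditioned problem like this one, the gain from extra passes saturates quickly; the paper's point is that on hard problems (slow eigenvalue decay of the covariance matrix, less regular optimal predictor) the number of passes needed for optimal prediction grows with the sample size.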
Document type:
Conference paper
Neural Information Processing Systems (NIPS), Dec 2018, Montréal, Canada. 2018

https://hal.archives-ouvertes.fr/hal-01799116
Contributor: Loucas Pillaud-Vivien
Submitted on: Thursday, November 22, 2018 - 14:25:08
Last modified on: Saturday, November 24, 2018 - 01:24:43

Files

multipass_sgd.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01799116, version 3
  • arXiv: 1805.10074

Citation

Loucas Pillaud-Vivien, Alessandro Rudi, Francis Bach. Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes. Neural Information Processing Systems (NIPS), Dec 2018, Montréal, Canada. 2018. 〈hal-01799116v3〉

Metrics

  • Record views: 54
  • File downloads: 25