On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport

Lenaic Chizat 1, Francis Bach 1
1 SIERRA - Statistical Machine Learning and Parsimony, DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.
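To make the method described in the abstract concrete, the following is a minimal sketch (not the authors' code): the unknown measure over hidden units is discretized into m particles, and plain gradient descent is run jointly on their weights and positions for a single-hidden-layer ReLU network with squared loss. The architecture, synthetic data, initialization scale, and step size are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the authors' code): the measure over hidden units is
# discretized into m particles, and gradient descent is run jointly on their
# weights and positions. ReLU units, squared loss, the synthetic data and the
# step size below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (assumed for illustration).
n, d = 200, 5
X = rng.standard_normal((n, d))
y = np.sin(X @ rng.standard_normal(d))

# Over-parameterization: m particles; particle i has an output weight w[i]
# and a position theta[i] (the hidden unit's input weights).
m = 1000
w = rng.standard_normal(m) / m          # weights, initialized at O(1/m)
theta = rng.standard_normal((m, d))     # positions

def predict(w, theta, X):
    # f(x) = sum_i w_i * relu(<theta_i, x>): a mixture of m particles.
    return np.maximum(X @ theta.T, 0.0) @ w

lr = 1e-3  # discrete step size standing in for the continuous-time flow
for step in range(5000):
    r = predict(w, theta, X) - y                      # residual, shape (n,)
    act = np.maximum(X @ theta.T, 0.0)                # ReLU activations, (n, m)
    grad_w = act.T @ r / n                            # gradient w.r.t. weights
    mask = (X @ theta.T > 0).astype(float)            # ReLU derivative
    grad_theta = ((r[:, None] * mask) * w).T @ X / n  # gradient w.r.t. positions
    w -= lr * grad_w                                  # move every particle's weight...
    theta -= lr * grad_theta                          # ...and its position

print("final mean squared error:", np.mean((predict(w, theta, X) - y) ** 2))
```

Per the abstract, the object of analysis is the many-particle limit of such a flow, where the empirical distribution of the particles can be studied as a Wasserstein gradient flow and the global-convergence guarantee applies.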

https://hal.archives-ouvertes.fr/hal-01798792
Contributor: Lénaïc Chizat
Submitted on: Wednesday, May 23, 2018 - 22:44:12
Last modified on: Wednesday, June 6, 2018 - 01:09:11
Document(s) archived on: Friday, August 24, 2018 - 22:58:07

Files

chizatbach2018global.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01798792, version 1
  • ARXIV : 1805.09545

Citation

Lenaic Chizat, Francis Bach. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. 2018. 〈hal-01798792〉

Metrics

Record views: 329
File downloads: 566