Conference paper, Year: 2018

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport

Lenaic Chizat
Francis Bach

Abstract

Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.
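The abstract describes the method studied in the paper: the unknown measure is discretized into a mixture of particles, and gradient descent is run jointly on their weights and positions, as when training a single-hidden-layer network. The sketch below illustrates this idea only; the ReLU features, squared loss, synthetic data, particle count, and step size are illustrative assumptions and not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch (not the authors' code): the unknown measure is discretized
# into m particles, each carrying a weight w_j and a position theta_j, and
# plain gradient descent updates both jointly. All concrete choices below
# (ReLU features, squared loss, synthetic data, m, lr) are illustrative.

rng = np.random.default_rng(0)
n, d, m = 200, 5, 500                      # samples, input dimension, particles
X = rng.standard_normal((n, d))
y = np.sin(X @ rng.standard_normal(d))     # synthetic regression target

theta = rng.standard_normal((m, d))        # particle positions (hidden-unit weights)
w = np.zeros(m)                            # particle weights (output weights)

lr, steps = 0.1, 3000
for _ in range(steps):
    pre = X @ theta.T                      # (n, m) pre-activations
    phi = np.maximum(pre, 0.0)             # ReLU features phi(theta_j, x_i)
    pred = phi @ w / m                     # averaging over particles discretizes the measure
    resid = pred - y                       # derivative of the squared loss w.r.t. pred (up to 1/n)

    grad_w = phi.T @ resid / (n * m)                                   # dLoss/dw_j
    grad_theta = ((resid[:, None] * (pre > 0.0) * w).T @ X) / (n * m)  # dLoss/dtheta_j

    # The factor m keeps the per-particle step size independent of the
    # discretization, mimicking the many-particle limit in which the
    # global-convergence result is stated.
    w -= lr * m * grad_w
    theta -= lr * m * grad_theta

final_pred = np.maximum(X @ theta.T, 0.0) @ w / m
print("final mean squared error:", np.mean((final_pred - y) ** 2))
```

The 1/m averaging over particles and the spread-out random initialization of the positions are meant to reflect the "correctly initialized, many-particle" regime the abstract refers to, but the hyperparameters here are arbitrary.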
Main file

chizatbach2018global.pdf (1.09 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01798792 , version 1 (23-05-2018)
hal-01798792 , version 2 (27-10-2018)

Identifiers

HAL Id: hal-01798792

Cite

Lenaic Chizat, Francis Bach. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. Advances in Neural Information Processing Systems (NIPS), Dec 2018, Montréal, Canada. ⟨hal-01798792v1⟩
853 views
1989 downloads
