Journal article, Transactions on Machine Learning Research Journal, Year: 2023

On the Gradient Formula for learning Generative Models with Regularized Optimal Transport Costs

Abstract

The use of optimal transport costs for learning generative models has become popular with Wasserstein Generative Adversarial Networks (WGANs). Training a WGAN requires differentiating the optimal transport cost with respect to the parameters of the generative model. In this work, we provide sufficient conditions for the existence of a gradient formula in two different frameworks: semi-discrete optimal transport (i.e. with a discrete target distribution) and regularized optimal transport (i.e. with an entropic penalty). Both cases rely on the dual formulation of the transport cost, and the gradient formula involves a solution of the dual problem. The learning problem is addressed with an alternating algorithm, whose behavior is examined on the problem of MNIST digit generation. In particular, we analyze the impact of entropic regularization on both visual results and convergence speed.
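As a minimal illustrative sketch (not the authors' implementation), the ingredients described in the abstract can be assembled as follows: entropic optimal transport between an empirical source (e.g. generated samples) and a discrete target is solved with log-domain Sinkhorn iterations, which yields the dual potentials; the transport cost can then be differentiated with respect to the source points by holding the optimal plan fixed (a Danskin-type envelope argument). The function name `sinkhorn` and the squared-Euclidean cost are assumptions made for this example.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn(x, y, a, b, eps=0.1, n_iter=200):
    """Entropic OT between discrete measures sum_i a_i δ_{x_i} and sum_j b_j δ_{y_j}.

    Uses log-domain Sinkhorn iterations for numerical stability.
    Returns the optimal coupling P and the dual potentials (f, g),
    related by P_ij = a_i * b_j * exp((f_i + g_j - C_ij) / eps).
    """
    # Squared Euclidean ground cost c(x, y) = ||x - y||^2
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    f = np.zeros(len(a))
    g = np.zeros(len(b))
    for _ in range(n_iter):
        # Alternating dual updates (coordinate ascent on the dual problem)
        f = -eps * logsumexp((g[None, :] - C) / eps + np.log(b)[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + np.log(a)[:, None], axis=0)
    P = np.exp((f[:, None] + g[None, :] - C) / eps) * a[:, None] * b[None, :]
    return P, f, g

# Example: gradient of the transport cost w.r.t. the source points,
# differentiating through the fixed optimal plan (envelope argument).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))          # "generated" samples
y = rng.normal(size=(4, 2))          # discrete target support
a = np.full(5, 1 / 5)                # uniform source weights
b = np.full(4, 1 / 4)                # uniform target weights
P, f, g = sinkhorn(x, y, a, b)
# d/dx_i sum_ij P_ij ||x_i - y_j||^2 = 2 * (a_i * x_i - sum_j P_ij * y_j)
grad_x = 2 * (a[:, None] * x - P @ y)
```

In a WGAN-style setting, `grad_x` would then be backpropagated through the generator to its parameters; the point of the paper is to give sufficient conditions under which this kind of gradient formula is actually valid.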
Main file: gradient_wgan_tmlr_preprint.pdf (1.45 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03740368 , version 1 (29-07-2022)

Identifiers

  • HAL Id : hal-03740368 , version 1

Cite

Antoine Houdard, Arthur Leclaire, Nicolas Papadakis, Julien Rabin. On the Gradient Formula for learning Generative Models with Regularized Optimal Transport Costs. Transactions on Machine Learning Research Journal, 2023. ⟨hal-03740368⟩