Conference paper, 2019

Adversarial training of partially invertible variational autoencoders

Abstract

Adversarial generative image models yield outstanding sample quality, but suffer from two drawbacks: (i) they mode-drop, i.e., do not cover the full support of the target distribution, and (ii) they do not allow for likelihood evaluations on held-out data. Conversely, maximum likelihood estimation encourages models to cover the full support of the training data, but yields poor samples. To address these mutual shortcomings, we propose a generative model that can be jointly trained with both procedures. In our approach, the conditional independence assumption typically made in variational autoencoders is relaxed by leveraging invertible models. This leads to improved sample quality, as well as improved likelihood on held-out data. Our model significantly improves on existing hybrid models, yielding GAN-like samples, and Inception Score (IS) and Fréchet Inception Distance (FID) scores that are competitive with fully adversarial models, while offering likelihood measures on held-out data comparable to recent likelihood-based methods.
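To make the training scheme described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a VAE-style encoder/decoder is trained with the usual evidence lower bound (ELBO), the factorized decoder output is replaced by a single conditional affine coupling layer so that output dimensions are no longer conditionally independent given the latent while keeping the decoder likelihood exactly computable, and an adversarial loss from a discriminator is added to the generator objective. All dimensions, architectures, and the loss weighting are illustrative assumptions.

```python
# Hypothetical sketch: joint maximum-likelihood (ELBO) and adversarial (GAN)
# training of a generator whose decoder is made partially invertible with one
# conditional affine coupling layer. Sizes and weights are toy choices.
import math
import torch
import torch.nn as nn

D, Z = 8, 2  # toy data / latent dimensionality (illustrative only)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 2 * Z))

    def forward(self, x):
        mu, log_var = self.net(x).chunk(2, dim=-1)
        return mu, log_var

class CouplingDecoder(nn.Module):
    """Invertible map (z, eps) -> x: one affine coupling layer conditioned on z.
    Because the map is invertible in eps, log p(x|z) is exact via the
    change-of-variables formula, without a per-dimension independence assumption."""
    def __init__(self):
        super().__init__()
        self.cond = nn.Linear(Z, 64)
        self.net = nn.Linear(64 + D // 2, D)  # outputs (log_scale, shift) for 2nd half

    def _params(self, z, x_a):
        h = torch.relu(self.cond(z))
        log_s, t = self.net(torch.cat([h, x_a], -1)).chunk(2, -1)
        return torch.tanh(log_s), t  # bounded log-scale for numerical stability

    def forward(self, z, eps):  # sampling direction: noise -> data
        x_a, e_b = eps.chunk(2, -1)  # first half is passed through unchanged
        log_s, t = self._params(z, x_a)
        return torch.cat([x_a, e_b * log_s.exp() + t], -1)

    def log_prob(self, z, x):  # likelihood direction: data -> noise
        x_a, x_b = x.chunk(2, -1)
        log_s, t = self._params(z, x_a)
        eps = torch.cat([x_a, (x_b - t) * (-log_s).exp()], -1)
        base = -0.5 * eps.pow(2).sum(-1) - 0.5 * D * math.log(2 * math.pi)
        return base - log_s.sum(-1)  # Jacobian correction of the coupling layer

enc, dec = Encoder(), CouplingDecoder()
disc = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(32, D)  # placeholder batch; real images in practice

# Generator/encoder step: negative ELBO (maximum likelihood bound) + adversarial term.
mu, log_var = enc(x)
z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)        # reparameterization
kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(-1).mean()
nll = -dec.log_prob(z, x).mean()                             # exact decoder likelihood
x_fake = dec(torch.randn(32, Z), torch.randn(32, D))         # sample: prior -> flow
adv = bce(disc(x_fake), torch.ones(32, 1))                   # non-saturating GAN loss
loss_g = nll + kl + 1.0 * adv                                # joint hybrid objective
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Discriminator step: real vs. generated samples.
loss_d = bce(disc(x), torch.ones(32, 1)) + bce(disc(x_fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

In the paper's full model the invertible part of the decoder is a deep flow network rather than the single coupling layer used here; the sketch only illustrates how invertibility yields an exact log p(x|z), so the same generator can be scored by maximum likelihood and driven by a GAN loss at once.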
Main file: ms.pdf (2.51 MB). Origin: files produced by the author(s).

Dates and versions

hal-01886285, version 1 (02-10-2018)
hal-01886285, version 2 (01-03-2019)
hal-01886285, version 3 (03-12-2019)
hal-01886285, version 4 (03-01-2020)
hal-01886285, version 5 (14-02-2020)

Identifiers

  • HAL Id: hal-01886285, version 2

Cite

Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek. Adversarial training of partially invertible variational autoencoders. INNF'19 - Workshop on Invertible Neural Nets and Normalizing Flows, Jun 2019, Long Beach, United States. pp.1-14. ⟨hal-01886285v2⟩
