Disentangled Representation Learning and Generation with Manifold Optimization - Archive ouverte HAL
Preprint, Working Paper. Year: 2022


Abstract

Disentanglement is a useful property in representation learning that increases the interpretability of generative models such as Variational Auto-Encoders (VAEs), Generative Adversarial Networks, and their many variants. Typically in such models, a gain in disentanglement performance is traded off against generation quality. In the context of latent-space models, this work presents a representation learning framework that explicitly promotes disentanglement by encouraging orthogonal directions of variation. The proposed objective is the sum of an auto-encoder error term and a Principal Component Analysis reconstruction error in the feature space. This admits an interpretation as a Restricted Kernel Machine whose eigenvector matrix is valued on the Stiefel manifold. Our analysis shows that such a construction promotes disentanglement by matching the principal directions in the latent space with the directions of orthogonal variation in data space. In an alternating minimization scheme, we use the Cayley ADAM algorithm -- a stochastic optimization method on the Stiefel manifold -- together with the ADAM optimizer. Our theoretical discussion and various experiments show that the proposed model improves over many VAE variants in terms of both generation quality and disentangled representation learning.
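The objective described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: the function names, the weighting parameter `lam`, and the use of a closed-form eigendecomposition for the Stiefel-valued matrix `U` (in place of the Cayley ADAM updates the paper actually uses) are all assumptions made for clarity.

```python
import numpy as np

def combined_loss(x, x_rec, h, U, lam=1.0):
    """Sketch of the paper's objective: an auto-encoder reconstruction
    error plus a PCA reconstruction error on the latent codes h.
    U (d x k) has orthonormal columns, i.e. it lies on the Stiefel
    manifold. The balance weight `lam` is an illustrative assumption."""
    ae_err = np.mean(np.sum((x - x_rec) ** 2, axis=1))
    proj = h @ U @ U.T  # project latent codes onto the span of U
    pca_err = np.mean(np.sum((h - proj) ** 2, axis=1))
    return ae_err + lam * pca_err

def pca_directions(h, k):
    """Closed-form minimizer over U for fixed codes h: the top-k
    eigenvectors of the covariance of h. This stands in for one half
    of the alternating minimization scheme; the paper instead performs
    stochastic Cayley ADAM steps on the Stiefel manifold."""
    hc = h - h.mean(axis=0)
    cov = hc.T @ hc / len(h)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order]
```

In an alternating scheme, one would iterate between ADAM updates of the encoder/decoder parameters under `combined_loss` and updates of `U` constrained to the Stiefel manifold; the closed-form PCA step above is the idealized (full-batch) version of the latter.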

Dates and versions

hal-03511352, version 1 (04-01-2022)

Identifiers

Cite

Arun Pandey, Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens. Disentangled Representation Learning and Generation with Manifold Optimization. 2022. ⟨hal-03511352⟩