Multi-View Data Generation Without View Supervision - Archive ouverte HAL
Preprint, Working Paper. Year: 2018

Multi-View Data Generation Without View Supervision

Abstract

The development of high-dimensional generative models has recently attracted a surge of interest with the introduction of variational auto-encoders and generative adversarial networks. Different variants have been proposed in which the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on the particular problem of generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. Accordingly, we propose a generative model and a conditional variant built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects under a wide variety of views. Unlike many multi-view approaches, our model requires no supervision on the views, only on the content. Compared to other conditional generation approaches, which are mostly based on binary or categorical attributes, we make no such assumption about the factors of variation. Our model can be used on problems with a huge, potentially infinite, number of categories. We evaluate it on four image datasets, demonstrating the effectiveness of the model and its ability to generalize.
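The central idea of the abstract — a latent space factored into an independent content code and view code, so that fixing one and varying the other changes only that factor — can be sketched as follows. This is a minimal illustrative toy, not the authors' actual architecture: all names, dimensions, and the linear stand-in for the generator network are assumptions.

```python
import numpy as np

# Hedged sketch of the disentangled latent space described in the abstract:
# each sample is produced from two independently drawn latent codes, a
# "content" code (intrinsic object features) and a "view" code (settings of
# one observation of that object). Dimensions are illustrative.

rng = np.random.default_rng(0)

CONTENT_DIM = 8   # assumed size of the content code
VIEW_DIM = 4      # assumed size of the view code

def sample_latents(n):
    """Draw n independent (content, view) code pairs from standard normals."""
    z_content = rng.standard_normal((n, CONTENT_DIM))
    z_view = rng.standard_normal((n, VIEW_DIM))
    return z_content, z_view

def generate(z_content, z_view, weights):
    """Toy stand-in for the generator: maps the concatenated latent codes
    to a sample (a linear map here, in place of a deep network)."""
    z = np.concatenate([z_content, z_view], axis=1)
    return z @ weights

# To render one object under many views, fix the content code and vary
# only the view code.
w = rng.standard_normal((CONTENT_DIM + VIEW_DIM, 16))
z_c, _ = sample_latents(1)
_, z_v = sample_latents(5)
views_of_one_object = generate(np.repeat(z_c, 5, axis=0), z_v, w)
print(views_of_one_object.shape)  # (5, 16): five views of the same object
```

The independence of the two factors is what lets the model cover "a huge, potentially infinite, number of categories": any content code can in principle be paired with any view code.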

Dates and versions

hal-01689997 , version 1 (22-01-2018)

Identifiers

Cite

Mickaël Chen, Ludovic Denoyer, Thierry Artières. Multi-View Data Generation Without View Supervision. 2018. ⟨hal-01689997⟩
1060 Views
0 Downloads
