Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering - HAL open archive
Conference paper, 2017

Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering

Abstract

In this paper, we propose a principled Tag Disentangled Generative Adversarial Network (TD-GAN) for re-rendering new images of an object of interest from a single image of it, by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly on a given set of images that are completely/partially tagged (i.e., a supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images via the generative network. To boost the quality of the disentangled representations, the tag mapping net is integrated to exploit the consistency between an image and its tags. Furthermore, the discriminative network implements the adversarial training strategy for generating more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework on the problem of interest.
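The abstract describes four jointly trained components: a disentangling network R (image to representation), a generative network G (representation to image), a tag mapping net g (tags to representation, used for a consistency objective), and a discriminative network D (adversarial critic). The sketch below illustrates that data flow only; all dimensions, weights, and function names are hypothetical placeholders (simple linear maps standing in for the deep networks of the paper), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
IMG_DIM, REP_DIM, TAG_DIM = 64, 16, 10

# Linear placeholders for the four networks; TD-GAN itself uses deep networks.
W_dis = rng.standard_normal((REP_DIM, IMG_DIM)) * 0.1   # disentangling network R
W_gen = rng.standard_normal((IMG_DIM, REP_DIM)) * 0.1   # generative network G
W_tag = rng.standard_normal((REP_DIM, TAG_DIM)) * 0.1   # tag mapping net g
w_dsc = rng.standard_normal(IMG_DIM) * 0.1              # discriminative network D

def disentangle(x):   # R(x): image -> disentangled representation
    return W_dis @ x

def generate(r):      # G(r): representation -> re-rendered image
    return W_gen @ r

def tag_map(t):       # g(t): tags -> representation (for consistency training)
    return W_tag @ t

def discriminate(x):  # D(x): image -> probability of being real (sigmoid)
    return 1.0 / (1.0 + np.exp(-w_dsc @ x))

# Re-rendering flow: encode the input image, obtain a representation
# consistent with the desired tags, and decode it back to an image.
x = rng.standard_normal(IMG_DIM)   # flattened input image (placeholder)
t_new = np.eye(TAG_DIM)[3]         # one-hot tag for the desired scene property

r = disentangle(x)                 # representation of the input image
r_new = tag_map(t_new)             # representation implied by the new tags
x_rerendered = generate(r_new)     # image re-rendered under the new tags

# Tag-consistency term: R(x) should agree with g(t) for a tagged pair
consistency = np.mean((r - tag_map(np.eye(TAG_DIM)[1])) ** 2)
```

In training, the adversarial loss from `discriminate` and the consistency term between `disentangle(x)` and `tag_map(t)` would be minimized jointly; here they are only evaluated once to show the shapes involved.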
Main file: IJCAI2017_ObjectImageRerendering.pdf (2.84 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01741207, version 1 (22-03-2018)

Identifiers

Cite

Chaoyue Wang, Chaohui Wang, Chang Xu, Dacheng Tao. Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering. International Joint Conference on Artificial Intelligence (IJCAI), Aug 2017, Melbourne, Australia. ⟨10.24963/ijcai.2017/404⟩. ⟨hal-01741207⟩
424 views
295 downloads
