Conference paper, Year: 2018

Deformable GANs for Pose-based Human Image Generation

Abstract

In this paper, we address the problem of generating person images conditioned on a given pose. Specifically, given an image of a person and a target pose, we synthesize a new image of that person in the novel pose. In order to deal with pixel-to-pixel misalignments caused by the pose differences, we introduce deformable skip connections in the generator of our Generative Adversarial Network. Moreover, a nearest-neighbour loss is proposed instead of the common L1 and L2 losses in order to match the details of the generated image with the target image. We test our approach using photos of persons in different poses and compare our method with previous work in this area, showing state-of-the-art results on two benchmarks. Our method can be applied to the wider field of deformable object generation, provided that the pose of the articulated object can be extracted using a keypoint detector.
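To make the nearest-neighbour loss mentioned in the abstract more concrete, below is a minimal Python (PyTorch) sketch of the idea, not the authors' exact formulation: for each position in the generated image, only the smallest L1 distance to target pixels within a small window around the same position is kept, so small pose-induced misalignments are not penalised the way they would be under a plain L1 loss. The function name and the neighbourhood size are illustrative assumptions.

import torch
import torch.nn.functional as F

def nearest_neighbour_loss(generated, target, neighbourhood=5):
    """Simplified per-pixel sketch of a nearest-neighbour reconstruction loss.

    generated, target: tensors of shape (B, C, H, W).
    For every spatial position, compare the generated pixel to all target
    pixels within a (neighbourhood x neighbourhood) window around the same
    position and keep only the smallest L1 distance.
    """
    pad = neighbourhood // 2
    # All shifted views of the target, shape (B, C*k*k, H*W)
    patches = F.unfold(target, kernel_size=neighbourhood, padding=pad)
    b, c, h, w = generated.shape
    patches = patches.view(b, c, neighbourhood * neighbourhood, h, w)
    # L1 distance of each generated pixel to each candidate target pixel
    dists = (generated.unsqueeze(2) - patches).abs().sum(dim=1)
    # Keep the best (nearest-neighbour) match per position, then average
    return dists.min(dim=1).values.mean()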
Main file
1801.00055.pdf (4.44 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01761539, version 1 (09-04-2018)

Identifiers

Cite

Aliaksandr Siarohin, Enver Sangineto, Stéphane Lathuilière, Nicu Sebe. Deformable GANs for Pose-based Human Image Generation. IEEE Conference on Computer Vision and Pattern Recognition, Jun 2018, Salt Lake City, United States. pp.3408-3416, ⟨10.1109/CVPR.2018.00359⟩. ⟨hal-01761539⟩
395 Views
175 Downloads
