Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance
Abstract
In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner. As in the deformable template paradigm, shape is represented as a deformation between a canonical coordinate system ('template') and an observed image, while appearance is modeled in canonical ('template') coordinates, thus discarding variability due to deformations. We introduce novel techniques that allow this approach to be deployed in the setting of autoencoders and show that the method can be used for unsupervised group-wise image alignment. We report experiments on expression morphing for humans, hands, and digits; face manipulation, such as shape and appearance interpolation; and unsupervised landmark localization. A more powerful form of unsupervised disentangling becomes possible in template coordinates, allowing us to successfully decompose face images into shading and albedo, and to further manipulate face images.

Fig. 1. Deforming Autoencoders follow the deformable template paradigm and model image generation through a cascade of appearance (or 'texture') synthesis in a canonical coordinate system and a spatial deformation that warps the texture to the observed image coordinates. By keeping the latent vector for texture short, the network is forced to model shape variability through the deformation branch so as to minimize a reconstruction loss. This allows us to train a deep generative image model that disentangles shape and appearance in an entirely unsupervised manner. [Diagram: Input Image → Encoder → Latent Representation → two Decoders, producing a Generated Texture and a Generated Deformation → Spatial Warping → Reconstructed Image.]
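The texture-then-warp generation step described in Fig. 1 can be sketched as follows. This is a plain NumPy stand-in for the differentiable spatial warping stage, not the authors' implementation; the function names, array shapes, and the bilinear sampling convention are illustrative assumptions. The key idea it demonstrates is that each output pixel reads the generated texture at a continuous canonical coordinate given by the generated deformation field:

```python
import numpy as np

def bilinear_sample(texture, grid):
    """Sample a single-channel texture (H, W) at continuous coordinates.

    grid has shape (H, W, 2); grid[..., 0] is the y coordinate and
    grid[..., 1] is the x coordinate to read from, in texture space.
    """
    H, W = texture.shape
    y = np.clip(grid[..., 0], 0, H - 1)
    x = np.clip(grid[..., 1], 0, W - 1)
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = y - y0
    wx = x - x0
    # Blend the four neighbouring texels (standard bilinear interpolation).
    top = texture[y0, x0] * (1 - wx) + texture[y0, x1] * wx
    bot = texture[y1, x0] * (1 - wx) + texture[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def deforming_decoder(texture, deformation):
    """Warp a canonical-coordinate texture into image coordinates.

    deformation (H, W, 2) holds, per output pixel, the canonical
    coordinate to sample (an identity grid plus predicted offsets).
    """
    return bilinear_sample(texture, deformation)

# An identity deformation reproduces the texture exactly.
H = W = 4
tex = np.arange(H * W, dtype=float).reshape(H, W)
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
identity = np.stack([ys, xs], axis=-1).astype(float)
assert np.allclose(deforming_decoder(tex, identity), tex)

# A half-pixel horizontal shift blends neighbouring texels.
shifted = identity.copy()
shifted[..., 1] += 0.5
out = deforming_decoder(tex, shifted)
```

In the actual model both `texture` and the offsets added to the identity grid would be decoder outputs, and the sampling would be implemented with a differentiable spatial transformer so the reconstruction loss can backpropagate through the warp into both branches.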
Origin: Files produced by the author(s)