Fully Convolutional Network with Superpixel Parsing for Fashion Web Image Segmentation

Lixuan Yang¹, Helena Rodriguez, Michel Crucianu¹, Marin Ferecatu
¹ CEDRIC - VERTIGO (Advanced Databases team)
CEDRIC - Centre d'études et de recherche en informatique et communications
Abstract: In this paper we introduce a new method for extracting deformable clothing items from still images by extending the output of a Fully Convolutional Neural Network (FCN) to infer context from local units (superpixels). To achieve this, we optimize an energy function that combines the large-scale structure of the image with the local low-level visual descriptors of superpixels, over the space of all possible pixel labelings. To assess our method we compare it to the unmodified FCN network used as a baseline, as well as to the well-known Paper Doll and Co-parsing methods for fashion images.
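The core idea in the abstract (propagating FCN output to superpixel units) can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's method: the function name `superpixel_parse` is invented, and it keeps only a unary term (the mean FCN class score per superpixel), omitting the low-level visual descriptors and the full energy minimization the paper describes.

```python
import numpy as np

def superpixel_parse(fcn_scores, superpixels):
    """Illustrative, unary-only stand-in for the paper's energy
    minimization: assign each superpixel the label maximizing the
    mean FCN class score over its pixels.

    fcn_scores:  (H, W, n_classes) per-pixel class scores from an FCN
    superpixels: (H, W) integer superpixel id per pixel
    returns:     (H, W) integer label map, constant on each superpixel
    """
    h, w, _ = fcn_scores.shape
    labels = np.empty((h, w), dtype=np.int64)
    for sp in np.unique(superpixels):
        mask = superpixels == sp                     # pixels of this superpixel
        mean_scores = fcn_scores[mask].mean(axis=0)  # average scores over them
        labels[mask] = mean_scores.argmax()          # one label per superpixel
    return labels
```

Even this simplified version shows the effect the authors exploit: noisy per-pixel FCN predictions are smoothed into coherent regions that follow superpixel boundaries.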

https://hal.archives-ouvertes.fr/hal-02435310
Contributor: Michel Crucianu
Submitted on: Friday, January 10, 2020 - 4:58:17 PM
Last modification on: Thursday, February 6, 2020 - 2:16:06 PM

File: yang17fully-convolutional.pdf (produced by the authors)


Citation

Lixuan Yang, Helena Rodriguez, Michel Crucianu, Marin Ferecatu. Fully Convolutional Network with Superpixel Parsing for Fashion Web Image Segmentation. In: Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin'ichi Satoh (Eds.), MultiMedia Modeling - 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Proceedings, Part II. Lecture Notes in Computer Science, vol. 10133, Springer, pp. 139-151, 2017. ISBN 978-3-319-51813-8. ⟨10.1007/978-3-319-51811-4_12⟩. ⟨hal-02435310⟩
