Book sections

Fully Convolutional Network with Superpixel Parsing for Fashion Web Image Segmentation

Lixuan Yang 1, Helena Rodriguez 2, Michel Crucianu 1, Marin Ferecatu 1
1 CEDRIC - VERTIGO - CEDRIC. Complex data, learning and representations
CEDRIC - Centre for Studies and Research in Computer Science and Communications
Abstract: In this paper we introduce a new method for extracting deformable clothing items from still images by extending the output of a Fully Convolutional Neural Network (FCN) to infer context from local units (superpixels). To achieve this, we optimize an energy function that combines the large-scale structure of the image with the local low-level visual descriptions of superpixels, over the space of all possible pixel labelings. To assess our method, we compare it to the unmodified FCN network used as a baseline, as well as to the well-known Paper Doll and Co-parsing methods for fashion images.
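The paper's full method minimizes an energy that also includes local low-level superpixel descriptors; as a rough illustration of the core idea of extending FCN output to superpixels, the sketch below shows only the simpler unary step: aggregating per-pixel FCN class scores inside each superpixel and assigning the whole superpixel the best-scoring class. The function name and array layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def superpixel_parse(fcn_scores, superpixels):
    """Assign one class label per superpixel from FCN unary scores.

    fcn_scores:  (H, W, C) array of per-pixel class scores from the FCN
                 (hypothetical layout assumed here).
    superpixels: (H, W) int array mapping each pixel to a superpixel id.
    Returns an (H, W) label map in which all pixels of a superpixel
    share the class with the highest aggregated score.
    """
    H, W, C = fcn_scores.shape
    flat_sp = superpixels.ravel()
    flat_scores = fcn_scores.reshape(-1, C)
    n_sp = int(flat_sp.max()) + 1
    # Sum the FCN scores of all pixels inside each superpixel
    # (the unary term of the energy, without the local descriptors).
    agg = np.zeros((n_sp, C))
    np.add.at(agg, flat_sp, flat_scores)
    sp_labels = agg.argmax(axis=1)
    # Broadcast the per-superpixel decision back to the pixel grid.
    return sp_labels[superpixels]
```

In the paper this aggregation is only one term of the energy; the optimization additionally weighs low-level visual descriptors of each superpixel, which this sketch omits.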

https://hal.archives-ouvertes.fr/hal-02435310
Contributor: Michel Crucianu
Submitted on: Friday, January 10, 2020 - 4:58:17 PM
Last modification on: Monday, February 21, 2022 - 3:38:20 PM

File

yang17fully-convolutional.pdf
Files produced by the author(s)

Citation

Lixuan Yang, Helena Rodriguez, Michel Crucianu, Marin Ferecatu. Fully Convolutional Network with Superpixel Parsing for Fashion Web Image Segmentation. In: Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin'ichi Satoh (eds.), MultiMedia Modeling - 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Proceedings, Part II, Lecture Notes in Computer Science, vol. 10133, Springer, pp. 139-151, 2017. ISBN 978-3-319-51813-8. ⟨10.1007/978-3-319-51811-4_12⟩. ⟨hal-02435310⟩
