Book sections

Fully Convolutional Network with Superpixel Parsing for Fashion Web Image Segmentation

Abstract: In this paper we introduce a new method for extracting deformable clothing items from still images by extending the output of a Fully Convolutional Neural Network (FCN) to infer context from local units (superpixels). To achieve this we optimize, over the space of all possible pixel labelings, an energy function that combines the large-scale structure of the image with local low-level visual descriptions of superpixels. To assess our method we compare it to the unmodified FCN used as a baseline, as well as to the well-known Paper Doll and Co-parsing methods for fashion images.
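The idea of refining FCN output with superpixels can be illustrated with a minimal sketch. The snippet below is not the paper's energy-minimization method: it keeps only a unary term, averaging the per-pixel FCN class scores inside each superpixel and labeling the whole superpixel with the argmax, which already smooths isolated misclassified pixels. The function name, the synthetic score map, and the hand-built superpixel map are all hypothetical stand-ins for illustration.

```python
import numpy as np

def superpixel_parse(fcn_scores, superpixels):
    """Refine per-pixel FCN class scores by averaging them over each
    superpixel, then assigning the superpixel its argmax class.
    A simplified, unary-only stand-in for the paper's energy function."""
    h, w, _ = fcn_scores.shape
    labels = np.zeros((h, w), dtype=np.int64)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        mean_scores = fcn_scores[mask].mean(axis=0)  # pool scores over the superpixel
        labels[mask] = int(mean_scores.argmax())
    return labels

# Toy example: a 4x4 image, 2 classes, 2 superpixels (left/right halves).
fcn = np.zeros((4, 4, 2))
fcn[:, :2, 0] = 1.0   # left half: FCN favors class 0
fcn[:, 2:, 1] = 1.0   # right half: FCN favors class 1
fcn[1, 1, 1] = 5.0    # one noisy pixel strongly favoring class 1
sp_map = np.zeros((4, 4), dtype=int)
sp_map[:, 2:] = 1     # superpixel 0 = left half, 1 = right half
refined = superpixel_parse(fcn, sp_map)
```

In the toy example, a per-pixel argmax would mislabel the noisy pixel at (1, 1), but pooling over the left superpixel overrides it; in practice superpixels would come from an oversegmentation algorithm such as SLIC rather than a hand-built map.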

Contributor: Michel Crucianu
Submitted on: Friday, January 10, 2020 - 4:58:17 PM
Last modification on: Monday, September 7, 2020 - 12:10:03 PM






Lixuan Yang, Helena Rodriguez, Michel Crucianu, Marin Ferecatu. Fully Convolutional Network with Superpixel Parsing for Fashion Web Image Segmentation. Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin'ichi Satoh. MultiMedia Modeling - 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Proceedings, Part II, 10133, Springer, pp.139-151, 2017, Lecture Notes in Computer Science, ISBN 978-3-319-51813-8. ⟨10.1007/978-3-319-51811-4_12⟩. ⟨hal-02435310⟩


