A Global-Local Approach to Extracting Deformable Fashion Items from Web Images

Lixuan Yang, Helena Rodriguez, Michel Crucianu, Marin Ferecatu
CEDRIC (Centre d'études et de recherche en informatique et communications) - VERTIGO (Advanced Databases) team
Abstract: In this work we propose a new framework for extracting deformable clothing items from images, using a three-stage global-local fitting procedure. First, a set of initial segmentation templates is generated from a hand-crafted database. Then, each template initiates an object extraction process consisting of a global alignment of the model, followed by a local search minimizing a measure of misfit with respect to the potential boundaries in its neighborhood. Finally, the results provided by each template are aggregated, using a global fitting criterion, to obtain the final segmentation. The method is validated on the Fashionista database and on a new database of manually segmented images. It compares favorably with the Paper Doll clothing parsing method and with the recent GrabCut in One Cut foreground extraction method. We quantitatively analyze each component and show examples of both successful segmentations and difficult cases.
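
To make the three-stage organization sketched in the abstract easier to follow, here is a minimal NumPy illustration of the pipeline: candidate templates are generated, each one is first globally aligned and then locally refined toward image edges, and the per-template results are finally aggregated. This is only a sketch of the structure, not the authors' implementation; every helper below (edge_map, generate_templates, global_align, local_refine, fit_score) is a simplified stand-in invented here for illustration.

import numpy as np

def edge_map(image):
    # Crude boundary evidence: gradient magnitude of a 2D grayscale image.
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def generate_templates(mask_database, shape, n=3):
    # Stand-in for template retrieval: resample the first n database masks
    # to the query image size by nearest-neighbour index sampling.
    templates = []
    for tpl in mask_database[:n]:
        rows = np.linspace(0, tpl.shape[0] - 1, shape[0]).astype(int)
        cols = np.linspace(0, tpl.shape[1] - 1, shape[1]).astype(int)
        templates.append(tpl[np.ix_(rows, cols)] > 0)
    return templates

def global_align(mask, edges):
    # Global fitting stand-in: translate the template so its centroid
    # coincides with the edge-weighted centroid of the image (wraps at borders).
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return mask
    w = edges / (edges.sum() + 1e-9)
    ty = int(round((w.sum(axis=1) * np.arange(edges.shape[0])).sum() - ys.mean()))
    tx = int(round((w.sum(axis=0) * np.arange(edges.shape[1])).sum() - xs.mean()))
    return np.roll(np.roll(mask, ty, axis=0), tx, axis=1)

def local_refine(mask, edges, iters=5):
    # Local search stand-in: grow the mask one pixel at a time, but only into
    # pixels with weak edge response, so the boundary stops at strong edges.
    thr = np.percentile(edges, 75)
    m = mask.copy()
    for _ in range(iters):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = m | (grown & (edges < thr))
    return m

def fit_score(mask, edges):
    # Global criterion stand-in: mean edge strength along the mask boundary.
    interior = mask.copy()
    interior[1:, :] &= mask[:-1, :]
    interior[:-1, :] &= mask[1:, :]
    interior[:, 1:] &= mask[:, :-1]
    interior[:, :-1] &= mask[:, 1:]
    boundary = mask & ~interior
    return edges[boundary].mean() if boundary.any() else 0.0

def extract_item(image, mask_database):
    # Three-stage pipeline: templates -> global + local fit -> aggregation.
    edges = edge_map(image)
    candidates = []
    for tpl in generate_templates(mask_database, image.shape):
        m = global_align(tpl, edges)
        m = local_refine(m, edges)
        candidates.append((fit_score(m, edges), m))
    return max(candidates, key=lambda c: c[0])[1]

In this toy version the aggregation step simply keeps the highest-scoring candidate; the paper's actual template retrieval, global alignment, local misfit measure and aggregation criterion are more elaborate than these stand-ins.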

https://hal.archives-ouvertes.fr/hal-02435296
Contributor: Michel Crucianu
Submitted on: Friday, January 10, 2020 - 4:52:20 PM
Last modification on: Thursday, February 6, 2020 - 2:16:06 PM

File

yang16global-local.pdf
Files produced by the author(s)

Citation

Lixuan Yang, Helena Rodriguez, Michel Crucianu, Marin Ferecatu. A Global-Local Approach to Extracting Deformable Fashion Items from Web Images. In: Enqing Chen, Yihong Gong, Yun Tie (eds.), Advances in Multimedia Information Processing - PCM 2016, Lecture Notes in Computer Science, vol. 10132, Springer, pp. 1-12, 2016. ISBN 978-3-319-48889-9. ⟨10.1007/978-3-319-48896-7_1⟩. ⟨hal-02435296⟩
