Exploiting Visual Concepts to Improve Text-Based Image Retrieval

Abstract: In this paper, we study how to automatically exploit visual concepts in a text-based image retrieval task. First, we use Forests of Fuzzy Decision Trees (FFDTs) to automatically annotate images with visual concepts. Second, optionally using WordNet, we match visual concepts against the textual query. Finally, we filter the text-based image retrieval result list using the FFDTs. This study is performed in the context of two tasks of the CLEF 2008 international campaign: the Visual Concept Detection Task (VCDT) (17 visual concepts) and the photographic retrieval task (ImageCLEFphoto) (39 queries and 20k images). Our best VCDT run is the 4th best of the 53 submitted runs. The ImageCLEFphoto results show a clear improvement, in terms of precision at 20, when using the visual concepts that explicitly appear in the query.
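The filtering step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the image names, concept scores, and threshold are hypothetical stand-ins for FFDT outputs, and the reordering strategy (matched images first, in their original text-based order) is one plausible reading of "filtering the result list".

```python
# Hypothetical data: a text-based retrieval ranking and per-image visual
# concept scores (standing in for FFDT classifier outputs).
text_ranking = ["img1", "img2", "img3", "img4"]
concept_scores = {
    "img1": {"animal": 0.9, "indoor": 0.1},
    "img2": {"animal": 0.2},
    "img3": {"animal": 0.7, "water": 0.8},
    "img4": {"water": 0.4},
}

def filter_by_concepts(ranking, scores, query_concepts, threshold=0.5):
    """Reorder a text-based ranking using detected visual concepts:
    images matching at least one query concept keep their original
    order and move ahead of the non-matching images."""
    matched = [img for img in ranking
               if any(scores.get(img, {}).get(c, 0.0) >= threshold
                      for c in query_concepts)]
    rest = [img for img in ranking if img not in matched]
    return matched + rest

# A query mentioning the concept "animal" promotes img1 and img3.
print(filter_by_concepts(text_ranking, concept_scores, {"animal"}))
# → ['img1', 'img3', 'img2', 'img4']
```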

https://hal.archives-ouvertes.fr/hal-00402448
Contributor: Sabrina Tollari
Submitted on: Tuesday, July 7, 2009 - 1:26:36 PM
Last modification on: Thursday, March 21, 2019 - 1:09:10 PM
Citation

Sabrina Tollari, Marcin Detyniecki, Christophe Marsala, Ali Fakeri-Tabrizi, Massih-Reza Amini, et al. Exploiting Visual Concepts to Improve Text-Based Image Retrieval. European Conference on Information Retrieval (ECIR), Apr 2009, Toulouse, France. pp. 701-705. ⟨10.1007/978-3-642-00958-7_70⟩. ⟨hal-00402448⟩