Toward a higher-level visual representation for content-based image retrieval - Archive ouverte HAL
Journal article: Multimedia Tools and Applications, 2012

Toward a higher-level visual representation for content-based image retrieval

Ismail Elsayad
Jean Martinet
Thierry Urruty
Chabane Djeraba

Abstract

With the huge amount of digital images now available, effective methods for accessing the desired images are essential. The proposed approach is based on an analogy between content-based image retrieval and text retrieval. Its aim is to build a meaningful mid-level representation of images to be used later for matching a query image against the other images in the target database. The approach first constructs different visual words using local patch extraction and a fusion of descriptors. Second, we introduce a new method using multilayer pLSA to eliminate the noisiest words generated by the vocabulary-building process. Third, a new spatial weighting scheme is introduced that weights visual words according to the probability that each visual word belongs to each of the n Gaussians. Finally, we construct visual phrases from groups of visual words that are involved in strong association rules. Experimental results show that our approach outperforms traditional image retrieval techniques.
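To make the pipeline concrete, the sketch below (not the authors' code; all function names and parameters are illustrative assumptions) shows the two most mechanical steps of such an approach: clustering local descriptors into a visual vocabulary, and distributing each visual word occurrence over n spatial Gaussians fitted to one image's keypoint coordinates, according to the probability of belonging to each Gaussian. Descriptor fusion, the multilayer pLSA filtering, and the association-rule mining of visual phrases are not shown.

```python
# Illustrative sketch only: a generic bag-of-visual-words vocabulary plus a
# GMM-based spatial weighting, under assumed inputs (precomputed local
# descriptors and their keypoint coordinates). Not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def build_vocabulary(all_descriptors, vocab_size=1000, seed=0):
    """Cluster local descriptors (e.g. SIFT vectors from many images)
    into vocab_size visual words."""
    kmeans = KMeans(n_clusters=vocab_size, random_state=seed, n_init=10)
    kmeans.fit(all_descriptors)
    return kmeans

def spatially_weighted_histogram(descriptors, keypoints_xy, vocab, n_gaussians=4):
    """Return an (n_gaussians x vocab_size) histogram for one image, where each
    visual word occurrence is split across spatial Gaussians by the probability
    that its patch belongs to each Gaussian."""
    words = vocab.predict(descriptors)                 # visual word id per patch
    gmm = GaussianMixture(n_components=n_gaussians, random_state=0)
    gmm.fit(keypoints_xy)                              # spatial layout of the patches
    resp = gmm.predict_proba(keypoints_xy)             # P(gaussian g | patch), per patch
    hist = np.zeros((n_gaussians, vocab.n_clusters))
    for w, r in zip(words, resp):
        hist[:, w] += r                                # soft, spatially weighted count
    return hist
```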
Main file: JournalMTAP.pdf (694.58 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00876204, version 1 (24-10-2013)

Identifiers

HAL Id: hal-00876204
DOI: 10.1007/s11042-010-0596-x

Cite

Ismail Elsayad, Jean Martinet, Thierry Urruty, Chabane Djeraba. Toward a higher-level visual representation for content-based image retrieval. Multimedia Tools and Applications, 2012, 60 (2), pp.455-482. ⟨10.1007/s11042-010-0596-x⟩. ⟨hal-00876204⟩