Conference paper, 2011

Predicting modality from text queries for medical image retrieval

Abstract

In recent years, increasing attention has been paid to the use of image modality in medical image retrieval. Several methods have been developed to automatically identify the modality of images and integrate this information into image retrieval systems. Results show that using the modality can significantly improve the performance of these systems. However, doing so also requires identifying the modality expressed in the queries. This task is usually performed by elementary pattern-matching techniques that can be applied to only a small proportion of queries. This paper addresses the issue of predicting the modality expressed in queries in a general way. First, a taxonomy of queries and the specificities of the problem are described. Then, a Bayesian classifier is proposed to automatically predict the modality expressed in a query, together with two models to integrate these predictions into an image retrieval system. Experiments performed on data from the ImageCLEFmed 2009 and 2010 challenges show that our approach can outperform current systems in precision, although performance can differ significantly from one query to another.
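
The abstract does not detail the classifier itself, so the following is an illustrative sketch only: a generic multinomial naive Bayes model over query terms, trained on a few invented query/modality pairs, to show what predicting a modality label from query text can look like. The training examples, modality labels, tokenization, and Laplace smoothing are all assumptions for this sketch, not taken from the paper.

```python
import math
from collections import Counter, defaultdict

# Invented (query, modality) training pairs for illustration only; the paper
# uses ImageCLEFmed data, whose exact form is not reproduced here.
TRAIN = [
    ("chest x-ray pneumonia", "x-ray"),
    ("bone fracture radiograph", "x-ray"),
    ("brain mri tumor", "mri"),
    ("t1 weighted mri of the spine", "mri"),
    ("abdominal ct scan lesion", "ct"),
    ("ct angiography of the lung", "ct"),
]

def tokenize(text):
    return text.lower().split()

# Estimate class priors P(m) and per-class term counts for P(w | m).
class_counts = Counter(modality for _, modality in TRAIN)
term_counts = defaultdict(Counter)
for query, modality in TRAIN:
    term_counts[modality].update(tokenize(query))
vocabulary = {w for counts in term_counts.values() for w in counts}

def predict_modality(query):
    """Pick argmax_m log P(m) + sum_w log P(w | m), with Laplace smoothing."""
    best_modality, best_score = None, float("-inf")
    for modality in class_counts:
        score = math.log(class_counts[modality] / len(TRAIN))
        total_terms = sum(term_counts[modality].values())
        for w in tokenize(query):
            count = term_counts[modality][w]  # 0 if unseen in this class
            score += math.log((count + 1) / (total_terms + len(vocabulary)))
        if score > best_score:
            best_modality, best_score = modality, score
    return best_modality

if __name__ == "__main__":
    print(predict_modality("mri images of the brain"))  # expected: mri
```

Naive Bayes is used here because it is the simplest Bayesian text classifier; the paper's actual model, features, and query taxonomy may differ.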
No file deposited

Dates and versions

hal-00812183, version 1 (11-04-2013)

Cite

Pierre Tirilly, Kun Lu, Xiangming Mu. Predicting modality from text queries for medical image retrieval. ACM Multimedia - Workshop on Medical Multimedia Analysis and Retrieval, Nov 2011, Scottsdale, United States. pp. 7-12. ⟨10.1145/2072545.2072548⟩. ⟨hal-00812183⟩