Data fusion through cross-modality metric learning using similarity-sensitive hashing

Abstract: Visual understanding is often based on measuring similarity between observations. Learning similarities specific to a certain perception task from a set of examples has been shown to be advantageous in various computer vision and pattern recognition problems. In many important applications, the data that one needs to compare come from different representations or modalities, and a similarity between such data must operate on objects that may have different and often incommensurable structure and dimensionality. In this paper, we propose a framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space. The mapping is expressed as a binary classification problem with positive and negative examples, and can be efficiently learned using boosting algorithms. The utility and efficiency of such a generic approach is demonstrated on several challenging applications, including cross-representation shape retrieval and alignment of multi-modal medical images.
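The core idea of the abstract can be illustrated with a minimal sketch: each modality gets its own map into binary codes of the same length, and the cross-modal dissimilarity is simply the Hamming distance between codes. Note that the paper learns these maps with boosting from labeled positive/negative pairs; the random sign-thresholded projections below are a hypothetical stand-in used only to show the code structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, n_bits, rng):
    """Random linear projection + sign threshold -> n_bits binary code.
    (In the paper these projections are *learned* via boosting so that
    similar cross-modal pairs map to nearby codes; random ones stand in
    here purely for illustration.)"""
    P = rng.standard_normal((n_bits, dim))   # projection matrix
    t = rng.standard_normal(n_bits)          # per-bit thresholds
    def h(x):
        return (P @ x + t > 0).astype(np.uint8)
    return h

n_bits = 16
hash_x = make_hash(dim=64, n_bits=n_bits, rng=rng)   # modality X (e.g. 64-d features)
hash_y = make_hash(dim=128, n_bits=n_bits, rng=rng)  # modality Y (e.g. 128-d features)

x = rng.standard_normal(64)
y = rng.standard_normal(128)

# Cross-modal dissimilarity: Hamming distance between the two binary codes.
# Both codes live in the same n_bits-dimensional Hamming space even though
# the input spaces differ in structure and dimensionality.
d = int(np.sum(hash_x(x) != hash_y(y)))
```

Because both embeddings land in the same Hamming space, retrieval across modalities reduces to fast binary-code comparison, which is what makes the hashing formulation attractive.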
Document type: Conference paper
23rd IEEE Conference on Computer Vision and Pattern Recognition - CVPR 2010, Jun 2010, San Francisco, United States. pp. 3594-3601, 2010. DOI: 10.1109/CVPR.2010.5539928

https://hal.archives-ouvertes.fr/hal-00856061
Contributor: Vivien Fécamp
Submitted on: Friday, August 30, 2013 - 13:56:37
Last modified on: Tuesday, February 5, 2019 - 13:52:14

Citation

Michael Bronstein, Alexander Bronstein, Fabrice Michel, Nikos Paragios. Data fusion through cross-modality metric learning using similarity-sensitive hashing. 23rd IEEE Conference on Computer Vision and Pattern Recognition - CVPR 2010, Jun 2010, San Francisco, United States. pp. 3594-3601, 2010. DOI: 10.1109/CVPR.2010.5539928. HAL id: hal-00856061.


Metrics: 2041 record views