Algorithmic Robustness for Semi-Supervised (ε, γ, τ )-Good Metric Learning

Abstract: The importance of metrics in machine learning has attracted growing interest in distance and similarity learning, especially for the Mahalanobis distance. However, this research field lacks theoretical guarantees on the generalization capacity of the classifier associated with a learned metric. The theoretical framework of (ε,γ,τ)-good similarity functions [1] was one of the first attempts to draw a link between the properties of a similarity function and those of a linear classifier making use of it. In this paper, we extend this theory to a method where the metric and the separator are jointly learned in a semi-supervised way, a setting that has not been explored before. We furthermore provide a generalization bound for the associated classifier based on the algorithmic robustness framework. The behavior of our method is illustrated via some experimental results.
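To fix ideas, the Mahalanobis distance mentioned in the abstract is parameterized by a positive semi-definite matrix M, with d_M(x, x') = sqrt((x − x')ᵀ M (x − x')); taking M = I recovers the Euclidean distance. The following is a minimal illustrative sketch of that formula only, not the joint metric/separator learner proposed in the paper:

```python
import numpy as np

def mahalanobis(x, x_prime, M):
    """Mahalanobis distance d_M(x, x') = sqrt((x - x')^T M (x - x')),
    where M is a positive semi-definite matrix."""
    diff = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    return float(np.sqrt(diff @ M @ diff))

# With M = I, this is the plain Euclidean distance.
print(mahalanobis([0, 0], [3, 4], np.eye(2)))  # 5.0

# With a (hypothetical) learned PSD matrix, distances are reweighted
# per direction, which is what metric learning exploits.
M_learned = np.array([[2.0, 0.0],
                      [0.0, 0.5]])
print(mahalanobis([0, 0], [3, 4], M_learned))  # sqrt(26) ≈ 5.099
```

In metric-learning methods of this family, M itself is optimized from (semi-)labeled data so that similar pairs end up close and dissimilar pairs far apart.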
Document type:
Conference paper
International Conference on Neural Information Processing ICONIP, Nov 2015, Istanbul, Turkey. pp.10, 2015

https://hal.archives-ouvertes.fr/hal-01223411
Contributor: Amaury Habrard
Submitted on: Monday, November 2, 2015 - 15:51:32
Last modified on: Friday, May 13, 2016 - 10:47:10

Identifiers

  • HAL Id : hal-01223411, version 1

Citation

Nicolae Irina, Marc Sebban, Amaury Habrard, Eric Gaussier, Massih-Reza Amini. Algorithmic Robustness for Semi-Supervised (ε, γ, τ )-Good Metric Learning. International Conference on Neural Information Processing ICONIP, Nov 2015, Istanbul, Turkey. pp.10, 2015. <hal-01223411>
