Similarity Learning for Provably Accurate Sparse Linear Classification

Abstract: In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions. Most state-of-the-art methods focus on learning Mahalanobis distances (which must satisfy a positive semi-definiteness constraint) for use in a local k-NN algorithm. However, no theoretical link is established between the learned metrics and their performance in classification. In this paper, we make use of the formal framework of good similarities introduced by Balcan et al. to design an algorithm for learning a non-PSD linear similarity optimized in a nonlinear feature space, which is then used to build a global linear classifier. We show that our approach has uniform stability and derive a generalization bound on the classification error. Experiments performed on various datasets confirm the effectiveness of our approach compared to state-of-the-art methods and provide evidence that it is (i) fast, (ii) robust to overfitting, and (iii) produces very sparse classifiers.
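The idea sketched in the abstract (a learned bilinear similarity, with no PSD constraint, feeding similarity-based features into an L1-regularized global linear classifier, in the spirit of Balcan et al.'s good-similarity framework) can be illustrated with a minimal toy sketch. Everything below is a hypothetical illustration, not the paper's actual algorithm: the data is synthetic, the matrix A is fixed by hand where the paper would learn it, and a simple proximal subgradient loop stands in for the paper's solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D binary data with labels in {-1, +1}; purely illustrative.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

# A bilinear similarity K_A(x, x') = x^T A x'. Here A is fixed for
# illustration; in the paper it would be learned. Note A need not be PSD.
A = np.array([[1.0, 0.5],
              [0.5, -0.2]])

landmarks = X[:20]            # landmark points defining the feature map
Phi = X @ A @ landmarks.T     # Phi[i, j] = K_A(x_i, landmark_j)

# L1-regularized hinge-loss linear classifier on the similarity features,
# fit by proximal subgradient steps; soft-thresholding drives many
# coordinates of w exactly to zero, hence a sparse classifier.
w = np.zeros(landmarks.shape[0])
lam, lr = 0.05, 0.01
for _ in range(2000):
    margins = y * (Phi @ w)
    active = margins < 1                                 # margin violators
    grad = -(y[active][:, None] * Phi[active]).sum(axis=0) / len(y)
    w = w - lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # L1 prox step

accuracy = float(np.mean(np.sign(Phi @ w) == y))
n_zero = int(np.sum(w == 0.0))
print(f"train accuracy={accuracy:.2f}, zero weights={n_zero}/{w.size}")
```

The sparsity of the final classifier is controlled by the L1 weight `lam`: each weight w_j attached to a landmark is soft-thresholded at every step, so redundant landmarks are discarded entirely rather than merely down-weighted.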
Keywords: Metric Learning
Document type: Conference papers

Cited literature: 13 references
Contributor: Marc Sebban
Submitted on: Thursday, June 21, 2012 - 12:01:30 PM
Last modification on: Tuesday, September 10, 2019 - 11:32:08 AM
Long-term archiving on: Saturday, September 22, 2012 - 2:21:30 AM

HAL Id: hal-00708401, version 1



Aurélien Bellet, Amaury Habrard, Marc Sebban. Similarity Learning for Provably Accurate Sparse Linear Classification. International Conference on Machine Learning, Jun 2012, United Kingdom. ⟨hal-00708401⟩

