
Good edit similarity learning by loss minimization

Abstract: Similarity functions are a fundamental component of many learning algorithms. When dealing with string or tree-structured data, edit distance-based measures are widely used, and a few methods exist for learning them from data. However, these methods offer no theoretical guarantee as to the generalization ability and discriminative power of the learned similarities. In this paper, we propose a loss minimization-based edit similarity learning approach, called GESL. It is driven by the notion of (ε, γ, τ)-goodness, a theory that bridges the gap between the properties of a similarity function and its performance in classification. We show that our learning framework is a suitable way to deal not only with strings but also with tree-structured data. Using the notion of uniform stability, we derive generalization guarantees for a large class of loss functions. We also provide experimental results on two real-world datasets which show that edit similarities learned with GESL induce more accurate and sparser classifiers than other (standard or learned) edit similarities.
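As an illustration of the setting the abstract describes (not the authors' GESL implementation), a common way to turn an edit similarity into a classifier in the goodness framework is to represent each string by its similarities to a set of landmark examples and train a (possibly sparse) linear classifier on those features. The sketch below uses the standard Levenshtein edit distance and an exp(-d) transform; the landmark set and the transform are illustrative assumptions, not choices from the paper.

```python
import math

def edit_distance(s, t):
    """Levenshtein distance via dynamic programming (two-row variant)."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def similarity(s, t):
    """Map the distance into a similarity in (0, 1]; exp(-d) is one
    illustrative choice, not the learned similarity from the paper."""
    return math.exp(-edit_distance(s, t))

def feature_map(x, landmarks):
    """Represent string x by its similarities to landmark strings;
    a linear classifier trained on these features is the usual next step."""
    return [similarity(x, l) for l in landmarks]

landmarks = ["abc", "abd", "xyz"]
print(feature_map("abc", landmarks))
```

With an L1-regularized linear classifier on top of such features, sparsity in the weight vector corresponds to selecting few landmarks, which is the sense in which a good similarity can induce sparse classifiers.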
Document type: Journal articles

Cited literature: 37 references
Contributor: Marc Sebban
Submitted on: Monday, August 20, 2012 - 2:49:57 PM
Last modification on: Monday, January 13, 2020 - 5:46:04 PM
Long-term archiving on: Wednesday, November 21, 2012 - 2:20:08 AM


Files produced by the author(s)




Aurélien Bellet, Amaury Habrard, Marc Sebban. Good edit similarity learning by loss minimization. Machine Learning, Springer Verlag, 2012, pp. 5-35. ⟨10.1007/s10994-012-5293-8⟩. ⟨hal-00690240⟩
