Good edit similarity learning by loss minimization

Abstract: Similarity functions are a fundamental component of many learning algorithms. When dealing with string or tree-structured data, edit distance-based measures are widely used, and there exist a few methods for learning them from data. However, these methods offer no theoretical guarantee as to the generalization ability and discriminative power of the learned similarities. In this paper, we propose a loss minimization-based edit similarity learning approach, called GESL. It is driven by the notion of (ε, γ, τ)-goodness, a theory that bridges the gap between the properties of a similarity function and its performance in classification. We show that our learning framework is a suitable way to deal not only with strings but also with tree-structured data. Using the notion of uniform stability, we derive generalization guarantees for a large class of loss functions. We also provide experimental results on two real-world datasets which show that edit similarities learned with GESL induce more accurate and sparser classifiers than other (standard or learned) edit similarities.
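For context, the (ε, γ, τ)-goodness notion invoked in the abstract comes from Balcan et al.'s theory of learning with good similarity functions. Roughly (this is a paraphrase of that framework, not necessarily the exact statement used in the paper), a similarity function K is (ε, γ, τ)-good for a binary problem with labels ℓ(x) ∈ {−1, +1} if there is a set of "reasonable" points, indicated by R(x), such that all but an ε fraction of examples are on average γ more similar to reasonable points of their own class than to reasonable points of the other class, and at least a τ fraction of points are reasonable:

\Pr_{x}\Big[\, \mathbb{E}_{x'}\big[\ell(x)\,\ell(x')\,K(x,x') \mid R(x')\big] \ge \gamma \,\Big] \ge 1 - \epsilon, \qquad \Pr_{x'}\big[R(x')\big] \ge \tau.

Such a similarity can then be used to learn a sparse linear separator over the similarities to a sample of reasonable points, which is what ties the goodness of a learned edit similarity directly to classifier accuracy and sparsity.

The sketch below illustrates the general recipe the abstract describes — learning edit costs by regularized loss minimization over labeled pairs — rather than the paper's exact algorithm. To keep the objective convex, the sketch fixes each pair's edit script in advance using unit (Levenshtein) costs, so the edit score is linear in the cost matrix; the margin parameters eta_sim and eta_dis and the regularizer beta are illustrative choices, not values from the paper.

```python
import numpy as np

def edit_op_counts(x, y):
    """Count the edit operations along one optimal unit-cost
    (Levenshtein) script between strings x and y. The empty string ''
    stands for the blank symbol, so ('a', '') is a deletion and
    ('', 'b') an insertion; matches are counted as (a, a) pairs."""
    m, n = len(x), len(y)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = np.arange(m + 1)
    D[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = min(D[i - 1, j - 1] + (x[i - 1] != y[j - 1]),
                          D[i - 1, j] + 1,
                          D[i, j - 1] + 1)
    # Backtrack to recover one optimal script and tally its operations.
    counts = {}
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and D[i, j] == D[i - 1, j - 1] + (x[i - 1] != y[j - 1]):
            op, i, j = (x[i - 1], y[j - 1]), i - 1, j - 1
        elif i > 0 and D[i, j] == D[i - 1, j] + 1:
            op, i = (x[i - 1], ''), i - 1
        else:
            op, j = ('', y[j - 1]), j - 1
        counts[op] = counts.get(op, 0) + 1
    return counts

def learn_edit_costs(pairs, same_label, alphabet,
                     eta_sim=1.0, eta_dis=3.0, beta=0.1,
                     lr=0.01, epochs=500):
    """Learn an edit-cost matrix C by regularized loss minimization:
    same-label pairs should get an edit score <= eta_sim, different-label
    pairs a score >= eta_dis. Each pair's script is fixed in advance
    (unit costs), so the score sum_ops C[op] * count[op] is linear in C
    and the objective (hinge losses + beta * ||C||_F^2) is convex.
    All characters in `pairs` must belong to `alphabet`."""
    symbols = list(alphabet) + ['']
    idx = {s: k for k, s in enumerate(symbols)}
    # Initialize with standard Levenshtein costs: matches free, edits at 1.
    C = np.ones((len(symbols), len(symbols))) - np.eye(len(symbols))
    feats = [edit_op_counts(x, y) for x, y in pairs]
    for _ in range(epochs):
        grad = 2.0 * beta * C
        for counts, same in zip(feats, same_label):
            score = sum(C[idx[a], idx[b]] * c for (a, b), c in counts.items())
            if same and score > eta_sim:          # similar pair scored too costly
                sign = 1.0
            elif not same and score < eta_dis:    # dissimilar pair scored too cheap
                sign = -1.0
            else:
                continue                          # zero hinge loss, no gradient
            for (a, b), c in counts.items():
                grad[idx[a], idx[b]] += sign * c
        C -= lr * grad
        np.clip(C, 0.0, None, out=C)              # keep costs non-negative
    return C, idx
```

At prediction time the learned C would be plugged back into a full edit-score computation and, for instance, mapped to a similarity such as exp(−score); the paper itself derives its uniform-stability-based generalization guarantees for a large class of such loss functions, not just the single hinge used in this sketch.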
Document type: Journal article

Cited literature: 37 references

https://hal.archives-ouvertes.fr/hal-00690240
Contributor: Marc Sebban
Submitted on: Monday, August 20, 2012 - 2:49:57 PM
Last modification on: Tuesday, September 10, 2019 - 11:32:08 AM
Long-term archiving on: Wednesday, November 21, 2012 - 2:20:08 AM

File: MLJ2012-preprint.pdf (produced by the author(s))

Citation

Aurélien Bellet, Amaury Habrard, Marc Sebban. Good edit similarity learning by loss minimization. Machine Learning, Springer Verlag, 2012, pp. 5-35. ⟨10.1007/s10994-012-5293-8⟩. ⟨hal-00690240⟩
