A Theoretical Analysis of Metric Hypothesis Transfer Learning

Abstract: We consider the problem of transferring a priori knowledge in the context of supervised metric learning approaches. While this setting has been successfully applied in some empirical contexts, no theoretical evidence exists to justify it. In this paper, we provide a theoretical justification based on the notion of algorithmic stability adapted to the regularized metric learning setting. We propose an on-average-replace-two-stability model allowing us to prove fast generalization rates when an auxiliary source metric is used to bias the regularizer. Moreover, we prove a consistency result from which we show the benefit of considering biased weighted regularized formulations, and we provide a solution to estimate the associated weight. We also present experiments illustrating the benefit of the approach on standard metric learning tasks and on a transfer learning problem where few labelled data are available.
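The abstract describes learning a metric while biasing the regularizer towards an auxiliary source metric. A minimal sketch of this idea, assuming a pairwise hinge loss, a Frobenius-norm biased regularizer beta * ||M - M_S||_F^2, and hypothetical margin values (the paper's exact loss and weighting scheme are not reproduced here):

```python
import numpy as np

def biased_metric_learning(X, pairs, labels, M_source, beta=0.1, lr=0.01, n_iter=200):
    """Learn a Mahalanobis matrix M via projected gradient descent on a
    pairwise hinge loss plus the biased regularizer beta * ||M - M_source||_F^2.
    The margins (1 for similar pairs, 2 for dissimilar) are illustrative choices."""
    M = M_source.copy()
    for _ in range(n_iter):
        # gradient of the biased regularizer pulls M towards the source metric
        grad = 2.0 * beta * (M - M_source)
        for (i, j), y in zip(pairs, labels):
            diff = (X[i] - X[j]).reshape(-1, 1)
            dist = float(diff.T @ M @ diff)
            if y == 1 and dist > 1.0:      # similar pair too far apart
                grad += diff @ diff.T
            elif y == -1 and dist < 2.0:   # dissimilar pair too close
                grad -= diff @ diff.T
        M -= lr * grad / len(pairs)
        # project back onto the PSD cone so M remains a valid metric
        w, V = np.linalg.eigh(M)
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M

# toy usage: transfer from the identity (Euclidean) source metric
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
pairs = [(0, 1), (2, 3), (4, 5)]
labels = [1, -1, 1]
M = biased_metric_learning(X, pairs, labels, np.eye(3))
```

Larger values of beta keep the learned metric closer to the source metric, which is the mechanism the paper's weighted formulations tune.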
Document type: Conference papers

Contributor: Michaël Perrot
Submitted on: Friday, July 10, 2015 - 5:28:30 PM
Last modification on: Thursday, July 26, 2018 - 1:10:20 AM


  • HAL Id: hal-01175610, version 1


Michaël Perrot, Amaury Habrard. A Theoretical Analysis of Metric Hypothesis Transfer Learning. International Conference on Machine Learning, Jul 2015, Lille, France. ⟨hal-01175610⟩
