Boosting for Unsupervised Domain Adaptation

Abstract: To cope with machine learning problems in which the learner receives data from different source and target distributions, a learning framework named domain adaptation (DA) has emerged, opening the door to the design of theoretically well-founded algorithms. In this paper, we present SLDAB, a self-labeling DA algorithm that draws on both the theory of boosting and the theory of DA. SLDAB works in the difficult unsupervised DA setting, where source and target training data are available but only the former are labeled. To deal with the absence of labeled target information, SLDAB jointly minimizes the classification error over the source domain and the proportion of margin violations over the target domain. To prevent the algorithm from inducing degenerate models, we introduce a divergence measure that penalizes hypotheses unable to reduce the discrepancy between the two domains. We present a theoretical analysis of our algorithm and provide empirical evidence of its efficiency compared to two widely used DA approaches.
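The core idea described in the abstract, a boosting loop that fits weak learners on weighted labeled source data while discounting learners that leave many unlabeled target points inside the margin, can be illustrated with a minimal sketch. This is a toy AdaBoost-style loop, not the paper's actual SLDAB: the choice of decision stumps as weak learners, the simple `(1 - violation_rate)` discount, and the omission of the divergence penalty are all simplifying assumptions made for illustration.

```python
import numpy as np

def fit_stump(X, y, w):
    # Exhaustive search for the best one-feature threshold classifier
    # under the source weight distribution w (classic boosting weak learner).
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] >= thr, pol, -pol)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, pol)
    return best, best_err

def predict_stump(X, stump):
    j, thr, pol = stump
    return np.where(X[:, j] >= thr, pol, -pol)

def boost_da(Xs, ys, Xt, n_rounds=20, margin=0.3):
    """Illustrative unsupervised-DA boosting: Xs/ys are labeled source data,
    Xt is unlabeled target data. Each weak learner's vote is discounted by
    the fraction of target points violating the ensemble's margin (a proxy
    for target confidence; the paper's divergence penalty is omitted)."""
    n = len(ys)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump, err = fit_stump(Xs, ys, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)          # AdaBoost vote weight
        # Normalized ensemble score on the target; |score| < margin counts
        # as a margin violation (no target labels are ever used).
        scores = sum(a * predict_stump(Xt, s)
                     for a, s in zip(alphas + [alpha], stumps + [stump]))
        total = sum(abs(a) for a in alphas) + abs(alpha) + 1e-12
        viol = np.mean(np.abs(scores) / total < margin)
        alpha *= (1.0 - viol)                          # discount uncertain learners
        stumps.append(stump)
        alphas.append(alpha)
        # Standard exponential reweighting of source examples.
        w *= np.exp(-alpha * ys * predict_stump(Xs, stump))
        w /= w.sum()

    def predict(X):
        F = sum(a * predict_stump(X, s) for a, s in zip(alphas, stumps))
        return np.sign(F)
    return predict
```

The discount step is the only place the unlabeled target data enters: a learner that classifies the source well but pushes many target points close to the decision boundary receives a smaller vote, which is a rough analogue of jointly minimizing source error and target margin violations.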
Document type: Conference paper

Contributor: Marc Sebban
Submitted on: October 3, 2013
  • HAL Id: hal-00869394, version 1



Amaury Habrard, Jean-Philippe Peyrache, Marc Sebban. Boosting for Unsupervised Domain Adaptation. ECML PKDD 2013, Sep 2013, Prague, Czech Republic. pp.433-448. ⟨hal-00869394⟩


