Conference paper, 2007

Gradient boosting for kernelized output spaces

Abstract

A general framework is proposed for gradient boosting in supervised learning problems where the loss function is defined using a kernel over the output space. It extends boosting in a principled way to complex output spaces (images, text, graphs, etc.) and can be applied to a general class of base learners working in kernelized output spaces. Empirical results are provided on three problems: a regression problem, an image completion task, and a graph prediction problem. In these experiments, the framework is combined with tree-based base learners, which have interesting algorithmic properties. The results show that gradient boosting significantly improves these base learners and yields results competitive with other tree-based ensemble methods based on randomization.
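To make the idea concrete, the following is a minimal sketch of gradient boosting with a loss defined by a kernel over the output space. It is not the authors' implementation: it assumes a squared loss in the output feature space, a multi-output regression tree as base learner, and a nearest-training-output pre-image step for decoding; the function names (boost_kernel_outputs, predict_coefs, preimage) are hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_kernel_outputs(X, K, n_stages=100, lr=0.1, max_depth=3):
    """X: (n, d) inputs; K: (n, n) Gram matrix of the output kernel.
    The model prediction for an input is a weight vector alpha over the
    training outputs, i.e. phi_hat(x) = sum_j alpha_j(x) * phi(y_j)."""
    n = X.shape[0]
    alpha = np.zeros((n, n))          # current coefficients for each training input
    stages = []
    for _ in range(n_stages):
        # Residuals in the output feature space, expressed in the span of the
        # training outputs: phi(y_i) - phi_hat(x_i) has coefficients delta_i - alpha_i.
        residual_coefs = np.eye(n) - alpha
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual_coefs)   # multi-output regression on the coefficients
        alpha += lr * tree.predict(X)
        stages.append(tree)
    return stages

def predict_coefs(stages, X_new, lr=0.1):
    """Accumulate the boosted coefficient vectors for new inputs."""
    alpha = np.zeros((X_new.shape[0], stages[0].n_outputs_))
    for tree in stages:
        alpha += lr * tree.predict(X_new)
    return alpha

def preimage(alpha, K):
    """Decode by picking, for each input, the training output y_j closest in
    feature space to the predicted point sum_k alpha_k phi(y_k); minimizing
    ||phi(y_j) - sum_k alpha_k phi(y_k)||^2 over j reduces to the score below."""
    scores = alpha @ K - 0.5 * np.diag(K)[None, :]
    return np.argmax(scores, axis=1)  # indices of the chosen training outputs

Because only the Gram matrix K enters the loop, the same sketch applies to any output type (images, text, graphs) for which an output kernel is available; the pre-image step used here is the simplest possible choice and would typically be replaced by a task-specific decoder.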
Main file
geurts-icml2007.pdf (167.63 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00341945 , version 1 (19-07-2009)

Identifiers

Cite

Pierre Geurts, Louis Wehenkel, Florence d'Alché-Buc. Gradient boosting for kernelized output spaces. Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007), 2007, Corvallis, OR, United States. pp. 289-296, ⟨10.1145/1273496.1273533⟩. ⟨hal-00341945⟩
185 Views
261 Downloads
