Accelerated proximal boosting

Abstract: Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable. This paper proposes to build upon the proximal point algorithm when the empirical risk to minimize is not differentiable. In addition, the novel boosting approach, called accelerated proximal boosting, benefits from Nesterov's acceleration in the same way as gradient boosting [Biau et al., 2018]. Advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison over gradient boosting regarding convergence rate and prediction accuracy.
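To make the functional-gradient-descent view concrete, below is a minimal, self-contained sketch of boosting with Nesterov-style momentum on the squared loss, using regression stumps as weak learners. This is an illustration of the general idea, not the authors' implementation: the function names (`fit_stump`, `accelerated_boost`) are hypothetical, the loss is assumed smooth (for a non-differentiable risk, the gradient step would be replaced by a proximal step, as the paper proposes), and the momentum schedule is the standard FISTA-style sequence.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression stump to pseudo-residuals r (1-D feature x)."""
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    best_sse, best = np.inf, None
    for i in range(1, len(xs)):
        left, right = rs[:i].mean(), rs[i:].mean()
        sse = ((rs[:i] - left) ** 2).sum() + ((rs[i:] - right) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, ((xs[i - 1] + xs[i]) / 2, left, right)
    thr, lv, rv = best
    return lambda q: np.where(q <= thr, lv, rv)

def accelerated_boost(x, y, n_iter=50, lr=0.1):
    """Boosting as functional gradient descent with Nesterov momentum.

    Maintains two sequences of predictions: F (main iterate) and G
    (lookahead iterate), mirroring accelerated gradient methods.
    """
    F = np.zeros_like(y)          # main sequence of predictions
    G = np.zeros_like(y)          # momentum / lookahead sequence
    t = 1.0                       # FISTA-style momentum parameter
    for _ in range(n_iter):
        grad = G - y              # gradient of 0.5 * squared loss at G
        h = fit_stump(x, -grad)   # weak learner approximating -gradient
        F_new = G + lr * h(x)     # functional gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        G = F_new + ((t - 1) / t_new) * (F_new - F)   # momentum extrapolation
        F, t = F_new, t_new
    return F
```

With momentum set to zero (i.e. keeping `G = F_new`), the loop reduces to plain gradient boosting; the extrapolation step is what the acceleration adds.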
Document type: Preprint, working paper
Contributor: Maxime Sangnier
Submitted on: Thursday, August 2, 2018 - 16:55:55
Last modified on: Saturday, March 16, 2019 - 02:01:49
Document(s) archived on: Saturday, November 3, 2018 - 15:59:20




  • HAL Id: hal-01853244, version 1


Erwan Fouillen, Claire Boyer, Maxime Sangnier. Accelerated proximal boosting. 2018. 〈hal-01853244〉


