Accelerated proximal boosting

Abstract: Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable. This paper proposes to build upon the proximal point algorithm when the empirical risk to minimize is not differentiable. In addition, the novel boosting approach, called accelerated proximal boosting, benefits from Nesterov's acceleration in the same way as gradient boosting [Biau et al., 2018]. Advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison over gradient boosting regarding convergence rate and prediction accuracy.
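
The abstract describes two ingredients: a proximal step in function space when the empirical risk is not differentiable, and a Nesterov-type momentum on the ensemble as in accelerated gradient boosting. As a rough illustration only (not the paper's algorithm), the following minimal Python sketch combines both ideas for the absolute (L1) loss with regression trees as weak learners; the function names, the mixing schedule gamma = 2/(t+2), the learning rate nu, and the choice of the L1 loss are assumptions made for this sketch.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def proximal_boosting_fit(X, y, n_rounds=200, nu=0.1, prox_step=1.0, max_depth=3):
    """Toy accelerated proximal boosting for the absolute (L1) loss (illustrative sketch).

    Each round fits a tree to the proximal residual of the L1 loss at the
    extrapolated predictions, prox_{c|y_i - .|}(g_i) - g_i = clip(y_i - g_i, -c, c),
    then applies a Nesterov-style two-sequence update on the predictions.
    """
    f0 = float(np.median(y))                 # constant minimizer of the L1 risk
    f = np.full(len(y), f0)                  # main sequence F_t(x_i)
    h = f.copy()                             # auxiliary (momentum) sequence H_t(x_i)
    trees = []
    for t in range(n_rounds):
        gamma = 2.0 / (t + 2.0)              # simple Nesterov-type mixing weight (assumed schedule)
        g = (1.0 - gamma) * f + gamma * h    # extrapolated predictions G_t
        residual = np.clip(y - g, -prox_step, prox_step)  # proximal residual of the L1 loss at G_t
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        step = nu * tree.predict(X)
        f = g + step                         # proximal/descent step from the extrapolated point
        h = h + step / gamma                 # larger step on the auxiliary sequence
        trees.append(tree)
    return f0, trees

def proximal_boosting_predict(X, f0, trees, nu=0.1):
    """Evaluate the fitted ensemble by replaying the same recursion on new inputs (same nu as fitting)."""
    f = np.full(X.shape[0], f0)
    h = f.copy()
    for t, tree in enumerate(trees):
        gamma = 2.0 / (t + 2.0)
        g = (1.0 - gamma) * f + gamma * h
        step = nu * tree.predict(X)
        f = g + step
        h = h + step / gamma
    return f

Replaying the recursion at prediction time is valid because the mixing weights depend only on the round index, so the fitted ensemble is a fixed linear combination of the constant initializer and the trees.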
Document type:
Preprint, working paper
2018

https://hal.archives-ouvertes.fr/hal-01853244
Contributor: Maxime Sangnier
Submitted on: Thursday, August 2, 2018 - 16:55:55
Last modified on: Saturday, March 16, 2019 - 02:01:49
Document(s) archived on: Saturday, November 3, 2018 - 15:59:20

File

paper.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01853244, version 1

Citation

Erwan Fouillen, Claire Boyer, Maxime Sangnier. Accelerated proximal boosting. 2018. 〈hal-01853244〉
