
Proximal boosting and its acceleration

Abstract: Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable. This paper introduces a novel boosting approach, called proximal boosting, which builds upon the proximal point algorithm when the empirical risk to minimize is not differentiable. Besides being motivated by non-differentiable optimization, the proposed algorithm benefits from Nesterov's acceleration in the same way as gradient boosting [Biau et al., 2018], leading to a variant called accelerated proximal boosting. The advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison with gradient boosting in terms of convergence rate and prediction accuracy.
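
To make the proximal step concrete, below is a minimal Python sketch of the proximal boosting update for the (non-differentiable) absolute loss, with regression trees as weak learners, together with a Nesterov-accelerated variant. The function names, the choice of loss, and the momentum schedule are illustrative assumptions made here for exposition; they are not the authors' implementation, for which the paper itself should be consulted.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def prox_abs(z, y, gamma):
    # Proximal operator of x -> gamma * |x - y|: soft-thresholding of z toward y.
    u = z - y
    return y + np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

def proximal_boost(X, y, n_rounds=100, gamma=0.1, step_size=1.0, max_depth=3):
    # Each round fits a weak learner to the proximal residual prox(z) - z
    # (a proximal point direction) rather than the negative gradient,
    # which remains well defined when the loss is not differentiable.
    z = np.zeros(len(y))          # current predictions on the training set
    learners = []
    for _ in range(n_rounds):
        residual = prox_abs(z, y, gamma) - z
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        z += step_size * tree.predict(X)
        learners.append(tree)
    return learners

def predict(learners, X, step_size=1.0):
    return step_size * sum(tree.predict(X) for tree in learners)

def accelerated_proximal_boost(X, y, n_rounds=100, gamma=0.1,
                               step_size=1.0, max_depth=3):
    # Nesterov-style variant: the proximal residual is evaluated at an
    # extrapolated sequence w, in the spirit of accelerated gradient boosting.
    # The schedule lam = t / (t + 3) is one standard choice, assumed here.
    z = np.zeros(len(y))
    w = z.copy()
    learners, momenta = [], []
    for t in range(n_rounds):
        residual = prox_abs(w, y, gamma) - w
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        z_new = w + step_size * tree.predict(X)
        lam = t / (t + 3.0)
        w = z_new + lam * (z_new - z)    # momentum extrapolation
        z = z_new
        learners.append(tree)
        momenta.append(lam)
    return learners, momenta

def predict_accelerated(learners, momenta, X, step_size=1.0):
    # Replays the same two-sequence recursion on new inputs.
    z = np.zeros(X.shape[0])
    w = z.copy()
    for tree, lam in zip(learners, momenta):
        z_new = w + step_size * tree.predict(X)
        w = z_new + lam * (z_new - z)
        z = z_new
    return z

As with standard gradient boosting, gamma and step_size trade off convergence speed against the accuracy of each proximal step; the accelerated variant stores the momentum coefficients so that prediction can replay the training recursion.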
Document type: Preprints, Working Papers, ...

Cited literature: 35 references

https://hal.archives-ouvertes.fr/hal-01853244
Contributor: Maxime Sangnier
Submitted on: Wednesday, January 22, 2020 - 10:25:11 AM
Last modification on: Tuesday, September 22, 2020 - 3:57:49 AM
Long-term archiving on: Thursday, April 23, 2020 - 1:11:09 PM

File

paper.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01853244, version 2

Citation

Erwan Fouillen, Claire Boyer, Maxime Sangnier. Proximal boosting and its acceleration. 2020. ⟨hal-01853244v2⟩

Metrics

Record views: 90
File downloads: 159