Proximal boosting and its acceleration

Abstract: Gradient boosting is a prediction method that iteratively combines weak learners into a complex and accurate model. From an optimization point of view, its learning procedure mimics gradient descent on a functional variable. When the empirical risk to be minimized is not differentiable, this paper proposes to build instead on the proximal point algorithm, introducing a novel boosting approach called proximal boosting. Besides being motivated by non-differentiable optimization, the proposed algorithm benefits from Nesterov's acceleration in the same way as gradient boosting [Biau et al., 2018], leading to a variant called accelerated proximal boosting. The advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data; in particular, the proposed methods compare favorably with gradient boosting in terms of convergence rate and prediction accuracy.
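To make the "gradient descent on a functional variable" view concrete, here is a minimal illustrative sketch of plain gradient boosting with least-squares stumps: at each round, the negative functional gradient of the squared loss is the residual vector, a weak learner is fit to it, and the ensemble takes a small step in that direction. This is standard gradient boosting, not the paper's proximal variant, and all names (`fit_stump`, `boost`, the learning rate) are illustrative choices, not from the paper.

```python
import numpy as np

def fit_stump(x, r):
    # Brute-force the threshold split minimizing squared error on residuals r.
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, a, b = best
    return lambda z: np.where(z <= t, a, b)

def boost(x, y, n_rounds=50, lr=0.1):
    # F is the "functional variable": the current prediction at the sample points.
    F = np.zeros_like(y)
    for _ in range(n_rounds):
        residual = y - F        # negative gradient of the squared loss w.r.t. F
        h = fit_stump(x, residual)
        F = F + lr * h(x)       # functional gradient step with shrinkage lr
    return F

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=200)
F = boost(x, y)
mse = float(np.mean((y - F) ** 2))
```

Proximal boosting, as described in the abstract, would replace the explicit gradient step with a proximal (implicit) step, which remains well defined when the loss is not differentiable.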
Document type: Preprint / Working Paper
Cited literature: 35 references
Contributor: Maxime Sangnier
Submitted on: Wednesday, January 22, 2020
Last modified on: Thursday, July 1, 2021




HAL Id: hal-01853244, version 2


Erwan Fouillen, Claire Boyer, Maxime Sangnier. Proximal boosting and its acceleration. 2020. ⟨hal-01853244v2⟩


