Soft-max boosting

Matthieu Geist
MALIS - MAchine Learning and Interactive Systems, SUPELEC-Campus Metz, CentraleSupélec
Abstract: The standard multi-class classification risk, based on the binary loss, is rarely minimized directly, due to (i) its lack of convexity and (ii) its lack of smoothness (and even of continuity). The classic approach is to minimize a convex surrogate instead. In this paper, we propose to replace the usual deterministic decision rule with a stochastic one, which yields a smooth risk (generalizing the expected binary loss and, more generally, the cost-sensitive loss). In practice, this (empirical) risk is minimized by gradient descent in the function space linearly spanned by a base learner (a.k.a. boosting). We provide a convergence analysis of the resulting algorithm and evaluate it on a range of synthetic and real-world data sets (noiseless and noisy domains, compared against convex and non-convex boosters).
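
The abstract outlines the method: a soft-max turns the learned scores into a stochastic decision rule, which makes the (empirical) risk smooth, and that risk is then minimized by functional gradient descent with a base learner. Below is a minimal, hypothetical sketch of this idea in Python, assuming the plain binary (0/1) loss, regression trees as the base learner, and a fixed step size; all names (`softmax_boost`, `n_rounds`, `step_size`) are invented for the example, and this is an illustration of the technique rather than the paper's exact algorithm.

```python
# Sketch of soft-max boosting: functional gradient descent on the smooth risk
# R(F) = E[1 - softmax(F(x))_y], i.e. the expected binary loss under the
# stochastic rule "draw class k with probability softmax(F(x))_k".
# Assumptions (not from the paper): regression trees as base learner,
# one tree per class component, fixed step size.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def softmax(scores):
    """Row-wise soft-max turning scores F(x) into class probabilities."""
    z = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_boost(X, y, K, n_rounds=100, step_size=0.1, max_depth=3):
    """Greedily build F_t = F_{t-1} + step_size * h_t by fitting the base
    learner h_t to the negative functional gradient of the empirical risk."""
    n = X.shape[0]
    F = np.zeros((n, K))                 # current ensemble scores F_t(x_i)
    ensemble = []
    for _ in range(n_rounds):
        P = softmax(F)                   # probabilities of the stochastic rule
        p_y = P[np.arange(n), y]         # probability assigned to the true class
        # Negative gradient of (1/n) sum_i (1 - p_{y_i}(x_i)) w.r.t. F_j(x_i):
        #   p_{y_i}(x_i) * (1{j = y_i} - p_j(x_i))
        G = -p_y[:, None] * P
        G[np.arange(n), y] += p_y
        round_h = []
        for j in range(K):               # one regressor per class component
            h = DecisionTreeRegressor(max_depth=max_depth).fit(X, G[:, j])
            F[:, j] += step_size * h.predict(X)
            round_h.append(h)
        ensemble.append(round_h)
    return ensemble

def predict_scores(ensemble, X, K, step_size=0.1):
    """Sum the fitted learners' outputs to recover F(x) on new data."""
    F = np.zeros((X.shape[0], K))
    for round_h in ensemble:
        for j, h in enumerate(round_h):
            F[:, j] += step_size * h.predict(X)
    return F
```

At test time, one can either sample a class from softmax(F(x)) (the stochastic rule the risk is defined with) or take the arg-max of F(x) for a deterministic classifier.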
Document type: Journal articles

https://hal.archives-ouvertes.fr/hal-01258816
Contributor: Matthieu Geist
Submitted on: Tuesday, January 19, 2016 - 3:02:03 PM
Last modification on: Thursday, April 5, 2018 - 12:30:24 PM

File
ml_sm_boost_rev.pdf (produced by the author(s))

Identifiers
HAL Id: hal-01258816
DOI: 10.1007/s10994-015-5491-2

Citation

Matthieu Geist. Soft-max boosting. Machine Learning, Springer Verlag, 2015, 100 (2), pp. 305-332. ⟨http://link.springer.com/article/10.1007/s10994-015-5491-2⟩. ⟨10.1007/s10994-015-5491-2⟩. ⟨hal-01258816⟩
