Margin adaptive model selection in statistical learning
Journal article, Bernoulli, 2011
Abstract

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper, we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of the condition, which accounts for the fact that learning within a small model can be much easier than within a large one. Requiring this "strong margin adaptivity" makes the model selection problem more challenging. We first prove, in a very general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. In contrast to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it fails to be strongly margin adaptive.
Main file: margin.pdf (282.02 KB). Origin: files produced by the author(s).

Dates and versions

hal-00274327 , version 1 (18-04-2008)
hal-00274327 , version 2 (07-02-2010)

Cite

Sylvain Arlot, Peter Bartlett. Margin adaptive model selection in statistical learning. Bernoulli, 2011, 17 (2), pp.687-713. ⟨10.3150/10-BEJ288⟩. ⟨hal-00274327v2⟩