Preprints, Working Papers. Year: 2008

V-fold cross-validation improved: V-fold penalization

Abstract

We study the efficiency of V-fold cross-validation (VFCV) for model selection from the non-asymptotic viewpoint, and suggest an improvement on it, which we call "V-fold penalization". Considering a particular (though simple) regression problem, we prove that VFCV with a bounded V is suboptimal for model selection, because it "overpenalizes", all the more so as V is small. Hence, asymptotic optimality requires V to go to infinity. However, when the signal-to-noise ratio is low, it appears that overpenalizing is necessary, so that the optimal V is not always the largest one, despite the variability issue. This is confirmed by experiments on simulated data. In order to improve on the prediction performance of VFCV, we define a new model selection procedure, called "V-fold penalization" (penVF). It is a V-fold subsampling version of Efron's bootstrap penalties, so it has the same computational cost as VFCV while being more flexible. In a heteroscedastic regression framework, assuming the models to have a particular structure, we prove that penVF satisfies a non-asymptotic oracle inequality with a leading constant that tends to 1 as the sample size goes to infinity. In particular, this implies adaptivity to the smoothness of the regression function, even with highly heteroscedastic noise. Moreover, it is easy to overpenalize with penVF, independently of the parameter V. A simulation study shows that this results in a significant improvement over VFCV in non-asymptotic situations.
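To make the construction concrete, here is a minimal Python sketch of the two criteria discussed above, under assumptions not spelled out in this abstract: least-squares loss, a finite list of model-fitting routines, and a penalty built as a constant C times the V-fold average of "full-sample risk minus training risk" of the estimators trained with one block removed, which is our reading of "a V-fold subsampling version of Efron's bootstrap penalties". The names (penalized_criteria, fitters) and the default C = V - 1 (with C > V - 1 to overpenalize) are illustrative choices, not the paper's exact definitions.

import numpy as np

def penalized_criteria(X, y, fitters, V=5, C=None, rng=None):
    """Sketch of V-fold penalization (penVF) and of VFCV for least-squares regression.

    fitters : list of functions (X_train, y_train) -> predictor, where a
        predictor maps an array of inputs to predicted outputs.
    C : overpenalization constant of penVF; C = V - 1 is the roughly unbiased
        choice assumed here, C > V - 1 overpenalizes.
    Returns the index of the model selected by penVF and both criteria.
    """
    n = len(y)
    if C is None:
        C = V - 1
    rng = np.random.default_rng(rng)
    blocks = np.array_split(rng.permutation(n), V)  # random partition into V blocks
    all_idx = np.arange(n)

    def emp_risk(predictor, idx):
        # Empirical least-squares risk on the observations indexed by idx.
        return np.mean((y[idx] - predictor(X[idx])) ** 2)

    crit_penvf, crit_vfcv = [], []
    for fit in fitters:
        s_hat = fit(X, y)                          # estimator trained on the full sample
        pen, vfcv = 0.0, 0.0
        for block in blocks:
            train = np.setdiff1d(all_idx, block)
            s_minus_j = fit(X[train], y[train])    # estimator trained without block j
            vfcv += emp_risk(s_minus_j, block)     # classical VFCV term (held-out risk)
            # penVF term: full-sample risk minus training risk of the same estimator.
            pen += emp_risk(s_minus_j, all_idx) - emp_risk(s_minus_j, train)
        crit_vfcv.append(vfcv / V)
        crit_penvf.append(emp_risk(s_hat, all_idx) + (C / V) * pen)
    return int(np.argmin(crit_penvf)), crit_penvf, crit_vfcv

A fitter can be, for instance, a regressogram with a given number of bins, so that fitters ranges over bin counts; the model selected by penVF is the minimizer of crit_penvf, while crit_vfcv is the classical VFCV criterion it is compared against.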
Files: penVF.pdf (539.41 KB), penVF_appendix.pdf (254.18 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00239182, version 1 (05-02-2008)
hal-00239182, version 2 (07-02-2008)

Identifiers

HAL Id: hal-00239182

Cite

Sylvain Arlot. V-fold cross-validation improved: V-fold penalization. 2008. ⟨hal-00239182v2⟩