Penalization versus Goldenshluger–Lepski strategies in warped bases regression

Abstract: This paper deals with the problem of estimating a regression function f in a random design framework. We build and study two adaptive estimators based on model selection, applied with warped bases. We start from a collection of finite-dimensional linear spaces spanned by orthonormal bases. Instead of expanding the target function f directly on these bases, we consider the expansion of an intermediate function: the composition of f with the inverse of the cumulative distribution function of the design, following Kerkyacharian and Picard (2004). The data-driven selection of the best space is carried out with two strategies: a penalized version of a "warped contrast", and a model selection device in the spirit of Goldenshluger and Lepski (2011). Both methods yield estimators that are easier to compute than least-squares estimators. We establish nonasymptotic bounds for the integrated mean-squared risk of the resulting estimators, study their adaptive properties when the regression function belongs to a Besov or Sobolev space, and compare the theoretical and practical performances of the two selection rules.
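The abstract describes the procedure only at a high level; as a purely illustrative companion, the sketch below shows one way the penalized-contrast variant could be implemented. It assumes a trigonometric basis on [0, 1], the empirical c.d.f. as a plug-in for the design distribution G, and a penalty pen(m) proportional to m/n with an arbitrary constant kappa; these specific choices are hypothetical and not prescribed by the paper.

```python
import numpy as np

def warped_regression(X, Y, max_dim=25, kappa=2.0):
    """Illustrative sketch of warped-bases regression with penalized
    model selection. Hypothetical choices (not fixed by the paper):
    trigonometric basis on [0, 1], empirical c.d.f. as a plug-in for
    the design c.d.f. G, penalty pen(m) = kappa * var(Y) * m / n."""
    n = len(X)

    def phi(j, u):
        # Orthonormal trigonometric basis on [0, 1].
        u = np.asarray(u, dtype=float)
        if j == 0:
            return np.ones_like(u)
        k = (j + 1) // 2
        return np.sqrt(2) * (np.cos(2 * np.pi * k * u) if j % 2 == 1
                             else np.sin(2 * np.pi * k * u))

    # Warp the design through the empirical c.d.f.: U_i = G_hat(X_i),
    # which is approximately uniform on [0, 1].
    Xs = np.sort(X)
    U = np.searchsorted(Xs, X, side="right") / n

    # Estimated coefficients of h = f o G^{-1}: a_j = mean(Y_i * phi_j(U_i)).
    a = np.array([np.mean(Y * phi(j, U)) for j in range(max_dim)])

    # Penalized warped contrast over the model dimension m:
    #   crit(m) = -sum_{j < m} a_j^2 + pen(m), minimized in m.
    pen = kappa * np.var(Y) * np.arange(1, max_dim + 1) / n
    crit = -np.cumsum(a ** 2) + pen
    m_hat = int(np.argmin(crit)) + 1

    def f_hat(x):
        # Plug-in estimator: f_hat = h_hat_{m_hat} o G_hat.
        u = np.searchsorted(Xs, x, side="right") / n
        return sum(a[j] * phi(j, u) for j in range(m_hat))

    return f_hat, m_hat

# Toy usage with a non-uniform design (values illustrative only):
rng = np.random.default_rng(0)
X = rng.beta(2, 5, size=500)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(500)
f_hat, m_hat = warped_regression(X, Y)
```

The Goldenshluger–Lepski variant discussed in the paper would replace the penalized criterion by a comparison of pairwise differences between estimators at different dimensions against variance-type terms; it fits the same skeleton, with only the selection step for m_hat changed.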
Document type: Journal article

Cited literature: 32 references

https://hal.archives-ouvertes.fr/hal-02132877
Contributor: Gaëlle Chagny
Submitted on: Friday, May 17, 2019 - 4:11:11 PM
Last modification on: Friday, September 20, 2019 - 4:34:03 PM

File

ArticlRegRevision.pdf (files produced by the author(s))

Citation

Gaëlle Chagny. Penalization versus Goldenshluger–Lepski strategies in warped bases regression. ESAIM: Probability and Statistics, EDP Sciences, 2013, 17, pp. 328-358. ⟨10.1051/ps/2011165⟩. ⟨hal-02132877⟩
