Generalization of l1 constraints for high-dimensional regression problems
Abstract
We consider the linear regression problem where the number p of covariates is possibly larger than the number n of observations. In this paper, we propose to approximate the unknown regression parameters, under sparsity assumptions, with a class of estimators motivated by geometrical considerations. Popular estimators based on controlling the l1 norm of the regression coefficients (such as the LASSO and the Dantzig selector) appear as special cases of our estimator, for which we derive Sparsity Inequalities, i.e., bounds involving the sparsity of the parameter to be estimated. In this generalized setup, we show that it is possible to consider variations of the loss function to be minimized. In particular, in a suitable setting, we derive a new estimator that is a transductive version of the LASSO, and we analyze its performance under milder assumptions than those required in the well-known results on the "usual" LASSO.
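To fix ideas, the p > n setting with a sparse regression parameter can be illustrated with a minimal sketch using scikit-learn's Lasso, one of the l1-based special cases mentioned above (the paper's generalized estimator and its transductive variant are not implemented here; the dimensions, sparsity level, and regularization weight are illustrative choices, not values from the paper):

```python
# Minimal sketch: l1-penalized regression with more covariates (p)
# than observations (n), recovering a sparse parameter vector.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5                      # n observations, p covariates, s-sparse truth
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 3.0                            # unknown sparse regression parameter
y = X @ beta + 0.1 * rng.standard_normal(n)

# LASSO: least squares with an l1 penalty on the coefficients,
# which drives most estimated coefficients exactly to zero.
est = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(est.coef_)
print("size of recovered support:", support.size)
```

Despite p being four times larger than n, the l1 penalty yields an estimate whose support is a small set of covariates containing the true signal.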
Origin: Files produced by the author(s)