Sparse Prediction with the $k$-Support Norm
Abstract
We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an $\ell_2$ penalty. We show that this new $k$-support norm provides a tighter relaxation than the elastic net and can thus be advantageous in sparse prediction problems. We also bound the looseness of the elastic net, thus shedding new light on it and providing justification for its use.
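The phrase "tightest convex relaxation" can be made precise. A standard way to state it (a sketch, assuming the usual setup over $\mathbb{R}^d$) is that the $k$-support norm is the gauge of the convex hull of vectors that are simultaneously $k$-sparse and $\ell_2$-bounded:

```latex
% Convex hull of k-sparse vectors in the Euclidean unit ball
C_k \;=\; \operatorname{conv}\bigl\{\, w \in \mathbb{R}^d \;:\; \|w\|_0 \le k,\ \|w\|_2 \le 1 \,\bigr\},
% The k-support norm is the gauge (Minkowski functional) of C_k
\|w\|_k^{sp} \;=\; \inf\bigl\{\, \lambda > 0 \;:\; w \in \lambda\, C_k \,\bigr\}.
```

In other words, its unit ball is exactly $C_k$, so no other norm with the same unit-ball constraint set can give a tighter convex relaxation of the combined sparsity and $\ell_2$ constraints.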