# Optimization with Sparsity-Inducing Penalties

SIERRA - Statistical Machine Learning and Parsimony
DI-ENS - Département d'informatique de l'École normale supérieure, ENS Paris - École normale supérieure - Paris, Inria Paris-Rocquencourt, CNRS - Centre National de la Recherche Scientifique : UMR8548
Abstract : Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have now emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
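To illustrate the first family of methods mentioned in the abstract, here is a minimal sketch of a proximal-gradient (ISTA-style) solver for the $\ell_1$-regularized least-squares problem $\min_w \frac{1}{2}\|Aw-b\|_2^2 + \lambda\|w\|_1$, whose proximal operator is elementwise soft-thresholding. The function names, step-size choice, and toy data below are illustrative assumptions, not the paper's own implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal-gradient iterations for 0.5*||A w - b||^2 + lam*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b)           # gradient of the least-squares term
        w = soft_threshold(w - grad / L, lam / L)  # gradient step, then prox
    return w

# Toy example: a sparse ground-truth vector is recovered with exact zeros
# on the inactive coordinates (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[[0, 3]] = [2.0, -1.5]
b = A @ w_true
w_hat = ista(A, b, lam=0.5)
```

Because the soft-thresholding step sets small coordinates exactly to zero, the iterates are sparse throughout, which is the defining computational advantage of proximal methods over plain subgradient descent for such non-smooth norms.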
Document type :
Journal articles

https://hal.archives-ouvertes.fr/hal-00613125
Contributor : Francis Bach
Submitted on : Sunday, November 20, 2011 - 2:56:23 PM
Last modification on : Wednesday, August 7, 2019 - 12:19:23 PM
Long-term archiving on : Tuesday, February 21, 2012 - 2:20:18 AM

### Files

Bach-Jenatton-Mairal-Obozinski...
Files produced by the author(s)

### Identifiers

• HAL Id : hal-00613125, version 2
• ARXIV : 1108.0775

### Citation

Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski. Optimization with Sparsity-Inducing Penalties. Foundations and Trends in Machine Learning, Now Publishers, 2011. ⟨hal-00613125v2⟩
