HAL : inria-00516723, version 4

Journal of Machine Learning Research, 12 (2011) 2297-2334
Proximal Methods for Hierarchical Sparse Coding
Rodolphe Jenatton 1, 2, 3, Julien Mairal 1, 2, Guillaume Obozinski 1, 2, 3, Francis Bach 1, 2, 3
(07/2011)

Sparse coding consists of representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework in which the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm can be computed exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has complexity that is linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional problems using the L1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. Then, we apply our optimization tools in the context of dictionary learning, where the learned dictionary elements naturally organize into a prespecified tree structure, leading to better performance in the reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
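The composition result described in the abstract can be illustrated with a minimal sketch. Assuming the ℓ2 variant of the tree-structured norm, the proximal operator reduces to one pass of group soft-thresholding over the tree's groups, visited in a suitable order (deepest groups before their ancestors in this sketch). The function names, the group encoding as `(indices, weight)` pairs, and the traversal order are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def group_soft_threshold(v, idx, thr):
    # l2 soft-thresholding restricted to the coordinates in `idx`:
    # shrink the sub-vector's norm by `thr`, or zero it out entirely.
    norm = np.linalg.norm(v[idx])
    if norm <= thr:
        v[idx] = 0.0
    else:
        v[idx] *= 1.0 - thr / norm
    return v

def tree_prox(u, groups, lam):
    # Sketch of the composed proximal operator: `groups` is a list of
    # (index_array, weight) pairs assumed ordered so that each group is
    # processed before any group containing it (children before parents).
    v = u.copy()
    for idx, weight in groups:
        v = group_soft_threshold(v, idx, lam * weight)
    return v

# Toy tree on 3 variables: leaf group {2} nested inside root group {0, 1, 2}.
groups = [(np.array([2]), 1.0), (np.array([0, 1, 2]), 1.0)]
x = tree_prox(np.array([3.0, 4.0, 0.5]), groups, 0.4)
```

Because each elementary step is a closed-form shrinkage on a contiguous sub-vector, the whole pass costs time proportional to the total size of the groups, consistent with the linear (or near-linear) complexity claimed above.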
1 :  WILLOW (INRIA Paris-Rocquencourt)
INRIA – École normale supérieure [ENS] - Paris – CNRS : UMR8548
2 :  Laboratoire d'informatique de l'école normale supérieure (LIENS)
CNRS : UMR8548 – École normale supérieure [ENS] - Paris
3 :  SIERRA (INRIA Paris - Rocquencourt)
INRIA : PARIS - ROCQUENCOURT – École normale supérieure [ENS] - Paris – CNRS : UMR8548
Statistics/Machine Learning
Convex optimization – proximal methods – sparse coding – dictionary learning – structured sparsity – matrix factorization
Files attached to this document:
PDF
hal_jenatton11a.pdf(580.4 KB)
PS
hal_jenatton11a.ps(2.3 MB)
