Learning a fast transform with a dictionary

Abstract — A powerful approach to sparse representation, dictionary learning consists in finding a redundant frame in which the representation of a particular class of images is sparse. In practice, all dictionary learning algorithms iteratively estimate the dictionary and a sparse representation of the images using this dictionary. However, the numerical complexity of dictionary learning restricts its use to atoms with a small support. This paper introduces a way to alleviate this issue: dictionary atoms are obtained by translating the composition of K convolutions with S-sparse kernels of known support. The dictionary update step associated with this strategy is a non-convex optimization problem, which we study here. A block-coordinate descent (Gauss-Seidel) algorithm is proposed to solve this problem, whose search space is of dimension KS, much smaller than the size of the image. Moreover, the complexity of the algorithm is linear with respect to the size of the image, allowing larger atoms to be learned (as opposed to small patches). An experiment is presented in which a large cosine atom is approximated with K = 7 sparse kernels, demonstrating very good accuracy.
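The central idea of the abstract — building a large-support atom as the composition of K convolutions with S-sparse kernels, so that only KS coefficients need to be estimated — can be illustrated with a minimal sketch. This is not the authors' algorithm; the kernel supports, sparsity pattern, and values below are hypothetical choices for demonstration only.

```python
import numpy as np

# Hypothetical illustration: a 1-D atom formed by composing K convolutions
# with S-sparse kernels, each of small known support (length 5 here).
K, S = 3, 2                # number of kernels, nonzeros per kernel
support_len = 5            # known support size of each kernel
rng = np.random.default_rng(0)

kernels = []
for _ in range(K):
    h = np.zeros(support_len)
    idx = rng.choice(support_len, size=S, replace=False)
    h[idx] = rng.standard_normal(S)   # S nonzero coefficients
    kernels.append(h)

# The atom is h_1 * h_2 * ... * h_K (full convolution): its support grows
# with K, while the search space stays of dimension K*S.
atom = np.array([1.0])
for h in kernels:
    atom = np.convolve(atom, h)

print(atom.size)  # support length = K*(support_len - 1) + 1 = 13
```

With K = 3 kernels of support 5, the composed atom already spans 13 samples while only 6 coefficients parameterize it; this gap widens as K grows, which is the source of the method's efficiency.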
Document type:
Conference paper
iTwist, 2014, Namur, Belgium.

Contributor: Francois Malgouyres <>
Submitted on: Friday, March 10, 2017 - 15:02:32
Last modified on: Tuesday, March 14, 2017 - 01:09:42


Files produced by the author(s)


  • HAL Id : hal-01486838, version 1



Olivier Chabiron, François Malgouyres, Jean-Yves Tourneret, Nicolas Dobigeon. Learning a fast transform with a dictionary. iTwist, 2014, Namur, Belgium. 2014. <hal-01486838>


