Learning a fast transform with a dictionary

Abstract — Dictionary learning is a powerful approach to sparse representation: it consists of finding a redundant frame in which the representation of a particular class of images is sparse. In practice, all dictionary learning algorithms alternate between estimating the dictionary and computing a sparse representation of the images in that dictionary. However, the numerical complexity of dictionary learning restricts its use to atoms with small support. This paper introduces a way to alleviate this issue: dictionary atoms are obtained by translating the composition of K convolutions with S-sparse kernels of known support. The dictionary update step associated with this strategy is a non-convex optimization problem, which we study here. We propose a block-coordinate descent (Gauss-Seidel) algorithm to solve it; its search space has dimension KS, which is much smaller than the size of the image. Moreover, the complexity of the algorithm is linear with respect to the image size, allowing larger atoms to be learned (as opposed to small patches). An experiment approximating a large cosine atom with K = 7 sparse kernels demonstrates very good accuracy.
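The key structural idea of the abstract can be illustrated with a minimal NumPy sketch (all names, sizes, and the random kernel construction below are illustrative, not the paper's method): an atom is defined as the composition of K successive convolutions with S-sparse kernels, so its support grows with K while the number of free parameters stays at K·S.

```python
import numpy as np

rng = np.random.default_rng(0)
K, S, support = 7, 3, 5  # K kernels, each S-sparse within a small known support

# Build K sparse kernels: each has S nonzero entries at random positions
# (in the paper these values would be learned, not drawn at random).
kernels = []
for _ in range(K):
    h = np.zeros(support)
    idx = rng.choice(support, size=S, replace=False)
    h[idx] = rng.standard_normal(S)
    kernels.append(h)

# The atom is the composition of the K convolutions, starting from an impulse.
atom = np.array([1.0])
for h in kernels:
    atom = np.convolve(atom, h)  # full convolution: length grows by support-1

# The atom's support grows linearly with K (here K*(support-1) + 1 = 29 taps),
# while it is parameterized by only K*S = 21 values.
print(atom.size, K * S)
```

Applying the atom to an image then costs K sparse convolutions, i.e. O(KS) operations per pixel, which is what makes the learned transform fast and its complexity linear in the image size.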
Document type:
Conference paper
iTwist 2014, Namur, Belgium

https://hal.archives-ouvertes.fr/hal-01486838
Contributor: François Malgouyres
Submitted on: Friday, March 10, 2017 - 15:02:32
Last modified on: Tuesday, March 14, 2017 - 01:09:42
Document(s) archived on: Sunday, June 11, 2017 - 15:34:38

File

itwist14.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01486838, version 1

Citation

Olivier Chabiron, François Malgouyres, Jean-Yves Tourneret, Nicolas Dobigeon. Learning a fast transform with a dictionary. iTwist 2014, Namur, Belgium. <hal-01486838>

Metrics

Record views: 66
Document downloads: 12