Toward fast transform learning - Archive ouverte HAL
Journal article in International Journal of Computer Vision, Year: 2015

Toward fast transform learning

Abstract

This paper introduces a new dictionary learning strategy based on atoms obtained by translating the composition of K convolutions with S-sparse kernels of known support. The dictionary update step associated with this strategy is a non-convex optimization problem. We propose a practical formulation of this problem and introduce a Gauss–Seidel-type algorithm, referred to as the alternative least square algorithm, to solve it. The search space of the proposed algorithm is of dimension KS, which is typically smaller than the size of the target atom and much smaller than the size of the image. Moreover, the complexity of this algorithm is linear with respect to the image size, allowing larger atoms to be learned (as opposed to small patches). The experiments conducted show that we are able to accurately approximate atoms such as wavelets, curvelets, sinc functions or cosines for large values of K. They also indicate that the algorithm generally converges to a global minimum for large values of K and S.
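As a reading aid, here is a minimal sketch (not the authors' code) of the atom parameterization described in the abstract: an atom is the composition of K convolutions with S-sparse kernels of known support, so it is determined by only K*S coefficients. The kernels are 1-D and all sizes are illustrative assumptions; the paper works with 2-D image kernels.

import numpy as np

def compose_atom(kernels):
    """Convolve the K sparse kernels together to form a single atom."""
    atom = np.array([1.0])           # identity element for convolution
    for h in kernels:
        atom = np.convolve(atom, h)  # each composition enlarges the support
    return atom

K, S, support_size = 4, 3, 9         # illustrative values only
rng = np.random.default_rng(0)

kernels = []
for _ in range(K):
    h = np.zeros(support_size)
    idx = rng.choice(support_size, size=S, replace=False)  # known S-sparse support
    h[idx] = rng.standard_normal(S)                        # K*S unknowns in total
    kernels.append(h)

atom = compose_atom(kernels)
print(atom.size)                     # K*(support_size - 1) + 1 = 33 samples

In the alternative least square algorithm described in the abstract, the kernels are updated one at a time in a Gauss–Seidel fashion: with the other kernels fixed, the update of a single kernel reduces to a least squares problem in its S coefficients.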
Main file: Chabiron_15202.pdf (2.69 MB). Origin: Files produced by the author(s).

Dates and versions

hal-00862903 , version 1 (17-09-2013)
hal-00862903 , version 2 (28-11-2013)
hal-00862903 , version 3 (16-07-2014)
hal-00862903 , version 4 (03-03-2016)

Identifiers

Cite

Olivier Chabiron, François Malgouyres, Jean-Yves Tourneret, Nicolas Dobigeon. Toward fast transform learning. International Journal of Computer Vision, 2015, vol. 114 (no. 2), pp. 195-216. ⟨10.1007/s11263-014-0771-z⟩. ⟨hal-00862903v4⟩
633 views
617 downloads
