Conference paper Year: 2020

A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding

Abstract

We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems, and whose architectures are derived from an optimization algorithm. We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions. This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end. The priors used in this presentation include variants of total variation, Laplacian regularization, bilateral filtering, sparse coding on learned dictionaries, and non-local self-similarities. Our models are fully interpretable as well as parameter- and data-efficient. Our experiments demonstrate their effectiveness on a large diversity of tasks ranging from image denoising and compressed sensing for fMRI to dense stereo matching.
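To illustrate the general idea of a layer whose forward pass runs an optimization algorithm, here is a minimal sketch of one unrolled ISTA iteration for sparse coding on a dictionary, one of the priors listed above. This is a generic illustration, not the paper's actual model: the dictionary `D`, step size, and threshold `lam` are fixed here, whereas in a trainable-prior framework they would be learned end to end.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: shrinks coefficients toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_layer(y, D, z, step, lam):
    """One unrolled ISTA iteration for min_z 0.5 * ||y - D z||^2 + lam * ||z||_1.

    Stacking such layers yields a network whose forward pass solves the
    sparse-coding problem; in a trainable model, D, step, and lam are learned.
    """
    grad = D.T @ (D @ z - y)  # gradient of the quadratic data-fit term
    return soft_threshold(z - step * grad, step * lam)

# Toy usage: encode a signal with a random unit-norm dictionary (hypothetical data).
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)            # normalize dictionary atoms
y = rng.standard_normal(16)
z = np.zeros(32)
step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1 / Lipschitz constant of the gradient
for _ in range(50):                       # 50 "layers" = 50 unrolled iterations
    z = ista_layer(y, D, z, step, lam=0.1)
```

Because every operation is differentiable almost everywhere, gradients can flow through the unrolled iterations, which is what makes such classical priors usable inside end-to-end trainable deep models.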
Main file
main.pdf (1.17 MB) Download
Origin: Files produced by the author(s)

Dates and versions

hal-02881924 , version 1 (26-06-2020)
hal-02881924 , version 2 (28-10-2020)

Identifiers

  • HAL Id : hal-02881924 , version 2

Cite

Bruno Lecouat, Jean Ponce, Julien Mairal. A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding. NeurIPS 2020 - 34th Conference on Neural Information Processing Systems, Dec 2020, Vancouver, Canada. pp.15664-15675. ⟨hal-02881924v2⟩
353 Views
261 Downloads
