Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms - Archive ouverte HAL
Journal article in Journal of Optimization Theory and Applications, Year: 2018

Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

Abstract

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, this yields a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function include, for example, Gradient Descent, Forward-Backward Splitting, and ProxDescent, without the common requirement of a Lipschitz continuous gradient. In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) that replace the Euclidean distance. The algorithm has a wide range of applications, including many linear and non-linear inverse problems in signal/image processing and machine learning.
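As a concrete illustration, the following minimal Python sketch (not the authors' code) implements one special instance of the scheme described in the abstract: a forward-backward model with a squared Euclidean proximity term and an Armijo-like backtracking line search, applied to a hypothetical LASSO-type problem 0.5*||Ax - b||^2 + lam*||x||_1. All function and parameter names (model_fb_minimization, soft_threshold, t, delta, rho) are placeholders chosen for this sketch, not notation from the paper.

import numpy as np

def soft_threshold(z, tau):
    # Proximal map of tau * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def model_fb_minimization(A, b, lam, x0, t=1.0, delta=1e-4, rho=0.5,
                          max_iter=500, tol=1e-8):
    # Objective: f(x) = 0.5*||Ax - b||^2 + lam*||x||_1 (hypothetical test problem).
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        grad_g = A.T @ (A @ x - b)             # gradient of the smooth part
        # Proximal point of the convex model function:
        #   y = argmin_z <grad_g, z - x> + lam*||z||_1 + (1/(2t))*||z - x||^2,
        # which here is the classical forward-backward step.
        y = soft_threshold(x - t * grad_g, t * lam)
        d = y - x                              # descent direction
        if np.linalg.norm(d) < tol:
            break
        # The model decrease serves as the reference slope in the Armijo-like test.
        model_dec = grad_g @ d + lam * (np.sum(np.abs(y)) - np.sum(np.abs(x)))
        gamma = 1.0
        while gamma > 1e-12 and f(x + gamma * d) > f(x) + delta * gamma * model_dec:
            gamma *= rho                       # backtracking
        x = x + gamma * d
    return x

# Illustration on random data (values are arbitrary).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = model_fb_minimization(A, b, lam=0.1, x0=np.zeros(100))

Because the line search controls the actual step, the proximal step size t in this sketch need not satisfy the usual 1/L bound tied to a Lipschitz continuous gradient, mirroring the flexibility emphasized in the abstract; replacing the squared Euclidean term by a Bregman distance generated by a Legendre function would recover the broader setting considered in the paper.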
Main file: AbstrBregMin.pdf (821.66 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01921804, version 1 (14-11-2018)

Identifiers

  • HAL Id: hal-01921804, version 1

Cite

Peter Ochs, Jalal M. Fadili, Thomas Brox. Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms. Journal of Optimization Theory and Applications, In press. ⟨hal-01921804⟩