DC Proximal Newton for Non-Convex Optimization Problems - Archive ouverte HAL
Journal article in IEEE Transactions on Neural Networks, 2016

DC Proximal Newton for Non-Convex Optimization Problems

Abstract

We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to handle such a situation. The algorithm consists in obtaining a descent direction from an approximation of the loss function and then performing a line search to ensure sufficient descent. A theoretical analysis shows that the limit points of the iterates of the proposed algorithm are stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art for a problem with a convex loss function and a non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem where both the loss function and the regularizer are non-convex.
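The iteration described above (convexify the objective, take a scaled proximal step, then line-search) can be sketched as follows. This is not the authors' exact method but a minimal illustrative instance, assuming a least-squares loss, the capped-l1 penalty lam*min(|x_i|, theta) as the DC regularizer, and a diagonal Hessian surrogate so that the scaled proximal step reduces to soft-thresholding:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.| applied componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dc_prox_newton(A, b, lam=0.1, theta=0.5, max_iter=100, tol=1e-8):
    """Illustrative DC proximal Newton sketch for
        min_x 0.5*||Ax - b||^2 + lam * sum_i min(|x_i|, theta).
    The capped-l1 penalty is DC: lam*|x_i| - lam*max(|x_i| - theta, 0).
    Each iteration linearizes the concave part at the current iterate,
    takes a diagonally scaled proximal Newton step, and backtracks
    until a sufficient-descent condition holds.
    """
    n = A.shape[1]
    x = np.zeros(n)
    # Diagonal surrogate of the loss Hessian A^T A (assumption: a
    # diagonal scaling keeps the scaled prox separable).
    H = np.maximum(np.sum(A * A, axis=0), 1e-8)

    def obj(v):
        return (0.5 * np.sum((A @ v - b) ** 2)
                + lam * np.sum(np.minimum(np.abs(v), theta)))

    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        # Subgradient of the concave part -lam*max(|x|-theta, 0).
        s = lam * np.sign(x) * (np.abs(x) > theta)
        # Descent direction from the convexified model:
        # scaled gradient step on the smooth part, then prox of lam*|.|.
        z = x - (grad - s) / H
        d = soft_threshold(z, lam / H) - x
        # Backtracking line search enforcing sufficient descent
        # of the true (non-convex) objective.
        t, f0 = 1.0, obj(x)
        while obj(x + t * d) > f0 - 1e-4 * t * np.dot(H * d, d) and t > 1e-10:
            t *= 0.5
        x_new = x + t * d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The diagonal scaling and the specific sufficient-descent test are simplifications for readability; the paper's algorithm allows general Hessian approximations, in which case the proximal subproblem is no longer separable and needs an inner solver.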
Main file: ProxNewton.pdf (237.2 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00952445 , version 1 (26-02-2014)
hal-00952445 , version 2 (02-06-2015)
hal-00952445 , version 3 (02-07-2015)

Identifiers

Cite

Alain Rakotomamonjy, Remi Flamary, Gilles Gasso. DC Proximal Newton for Non-Convex Optimization Problems. IEEE Transactions on Neural Networks, 2016. ⟨hal-00952445v3⟩
384 views
1152 downloads
