Journal articles

DC Proximal Newton for Non-Convex Optimization Problems

Abstract: We introduce a novel algorithm for solving learning problems in which both the loss function and the regularizer are non-convex but belong to the class of difference-of-convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to handle this setting. The algorithm obtains a descent direction from a convex approximation of the objective and then performs a line search to ensure sufficient descent. A theoretical analysis shows that the limit points of the iterates produced by the algorithm are stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art on a problem with a convex loss function and a non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem where both the loss function and the regularizer are non-convex.
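The scheme summarized in the abstract (build a convex surrogate of the DC objective, take a proximal step on it, then line-search for sufficient descent) can be sketched as follows. This is a minimal first-order illustration, not the paper's proximal Newton method: the capped-ℓ1 regularizer, the least-squares loss, and all parameter values are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dc_prox_gradient(A, b, lam=0.1, theta=0.5, n_iter=200):
    """Illustrative DC proximal scheme for
        min_x  0.5*||A x - b||^2 + lam * sum_i min(|x_i|, theta)
    using the DC split of the capped-l1 penalty:
        min(|x|, theta) = |x| - max(|x| - theta, 0),
    i.e. r1 = lam*||x||_1 (convex, prox-friendly) minus
         r2 = lam*sum(max(|x_i| - theta, 0)) (convex, linearized at each iterate).
    """
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth quadratic part

    def obj(x):
        return 0.5 * np.sum((A @ x - b) ** 2) \
            + lam * np.sum(np.minimum(np.abs(x), theta))

    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        # Subgradient of the concave part -r2 at x (linearization point).
        z = lam * np.sign(x) * (np.abs(x) > theta)
        # Descent direction from the convexified surrogate: one proximal
        # gradient step on g + r1 - <z, .>.
        x_new = soft_threshold(x - step * (grad - z), step * lam)
        d = x_new - x
        # Backtracking line search ensuring sufficient descent of the
        # true (non-convex) objective.
        t = 1.0
        while obj(x + t * d) > obj(x) - 1e-4 * t * np.dot(d, d) and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x
```

Because the linearization of r2 underestimates it, the convexified surrogate majorizes the true objective and matches it at the current iterate, so the full step is usually accepted; the line search merely guards the sufficient-descent condition.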

Cited literature: 49 references
Contributor: Alain Rakotomamonjy
Submitted on: Thursday, July 2, 2015 - 1:04:05 AM
Last modification on: Monday, October 12, 2020 - 11:10:19 AM
Long-term archiving on: Tuesday, April 25, 2017 - 9:21:18 PM


Files produced by the author(s)


  • HAL Id: hal-00952445, version 3
  • arXiv: 1507.00438


Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso. DC Proximal Newton for Non-Convex Optimization Problems. IEEE Transactions on Neural Networks, Institute of Electrical and Electronics Engineers, 2016. ⟨hal-00952445v3⟩


