Optimal Computational Trade-Off of Inexact Proximal Methods
Abstract
In this paper, we investigate the trade-off between convergence rate and computational cost when minimizing a composite functional with proximal-gradient methods, which are a popular optimization tool in machine learning. We consider the case when the proximity operator is approximated via an iterative procedure, which leads to an algorithm with two nested loops. We show that the computationally optimal strategy to reach a desired accuracy in finite time is to set the number of inner iterations to a constant, which differs from the strategy indicated by a convergence rate analysis. In the process, we also present a new procedure called SIP that is both computationally and practically efficient. Our numerical experiments confirm the theoretical findings and suggest that SIP can be a very competitive alternative to the standard procedure.
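The setting above — an outer proximal-gradient loop whose proximity operator is itself approximated by an inner iterative solver, with the inner iteration count held constant — can be sketched as follows. This is an illustrative toy instance, not the paper's SIP procedure: the smooth term `f`, the difference matrix `D`, the penalty `lam`, the constant `K_INNER`, and the choice of a dual projected-gradient inner solver are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 30
A = rng.standard_normal((m, n))       # toy least-squares data (assumption)
b = rng.standard_normal(m)
D = (np.eye(n) - np.eye(n, k=1))[:-1]  # first-difference matrix, so the
lam = 0.1                              # penalty lam*||D x||_1 has no closed-form prox

def f_grad(x):
    # gradient of the smooth term f(x) = 0.5*||A x - b||^2
    return A.T @ (A @ x - b)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(D @ x))

def approx_prox(v, tau, k):
    """Inexact prox of tau*||D.||_1 at v: k projected-gradient steps on the
    dual problem min_u 0.5*||v - D^T u||^2 s.t. ||u||_inf <= tau.
    The primal solution is recovered as v - D^T u."""
    u = np.zeros(D.shape[0])
    s = 1.0 / np.linalg.norm(D, 2) ** 2   # dual step size from ||D||^2
    for _ in range(k):
        u = np.clip(u + s * (D @ (v - D.T @ u)), -tau, tau)
    return v - D.T @ u

t = 1.0 / np.linalg.norm(A, 2) ** 2   # outer step = 1/L, L = Lipschitz const of grad f
K_INNER = 5                           # constant inner iteration count (the trade-off knob)

x = np.zeros(n)
obj_start = objective(x)
for _ in range(200):                  # outer proximal-gradient loop
    x = approx_prox(x - t * f_grad(x), t * lam, K_INNER)
obj_end = objective(x)
```

Varying `K_INNER` makes the trade-off concrete: a larger value makes each outer step more accurate but more expensive, while the analysis summarized in the abstract says a well-chosen constant is the computationally optimal schedule.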