Conference papers

Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

Mark Schmidt 1,2,*, Nicolas Le Roux 1,2, Francis Bach 1,2
* Corresponding author
1 SIERRA - Statistical Machine Learning and Parsimony
2 DI-ENS - Département d'informatique de l'École normale supérieure, ENS Paris, Inria Paris-Rocquencourt, CNRS UMR 8548
Abstract: We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
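The setting in the abstract can be illustrated with a minimal sketch: a basic proximal-gradient iteration for the lasso, where a synthetic error is injected into the gradient and its norm is forced to decay quickly with the iteration count (the paper shows that a sufficiently fast decay preserves the error-free convergence rate). The problem instance, function names, and the particular O(1/k^2) decay schedule below are illustrative assumptions, not taken from the paper's experiments.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximity operator of t * ||.||_1 (computed exactly here; the paper
    # also covers the case where this prox is itself only approximate).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inexact_proximal_gradient(A, b, lam, n_iter=200, seed=0):
    """Basic proximal-gradient for the lasso
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    with a simulated gradient error e_k of norm O(1/k^2) -- a decay fast
    enough, per the paper's analysis, to retain the exact method's rate."""
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        grad = A.T @ (A @ x - b)           # exact gradient of the smooth term
        e = rng.standard_normal(x.shape)   # random error direction
        e *= (1.0 / k**2) / (np.linalg.norm(e) + 1e-12)  # ||e_k|| = 1/k^2
        x = soft_threshold(x - (grad + e) / L, lam / L)  # inexact prox step
    return x
```

The accelerated variant analyzed in the paper adds a momentum term on top of this step; the same decreasing-error condition (with a faster required decay) applies there as well.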

Cited literature: 39 references

https://hal.inria.fr/inria-00618152
Contributor: Nicolas Le Roux
Submitted on: Thursday, December 1, 2011 - 4:03:15 PM
Last modification on: Thursday, February 7, 2019 - 3:49:42 PM
Archived on: Monday, December 5, 2016 - 12:57:43 AM

Files

111201_arxiv_inexact_prox.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00618152, version 3
  • arXiv: 1109.2415

Citation

Mark Schmidt, Nicolas Le Roux, Francis Bach. Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization. NIPS'11 - 25th Annual Conference on Neural Information Processing Systems, Dec 2011, Granada, Spain. ⟨inria-00618152v3⟩

Metrics

Record views: 1182
File downloads: 2281