Journal articles

Minimizing Finite Sums with the Stochastic Average Gradient

Mark Schmidt (1, 2), Nicolas Le Roux (2, 1), Francis Bach (2, 1)
1 SIERRA - Statistical Machine Learning and Parsimony (DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris)
Abstract: We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in general, and when the sum is strongly convex, the convergence rate is improved from the sub-linear O(1/k) to a linear rate of the form O(p^k) for p < 1. Further, in many cases the convergence rate of the new method is also faster than that of black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that performance may be further improved through non-uniform sampling strategies.
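As a rough illustration of the update described in the abstract, the Python sketch below keeps one stored gradient per term and, at each iteration, refreshes a single randomly chosen entry before stepping along the average of all stored gradients. This is a minimal sketch, not the authors' reference implementation; the function name sag, the constant step size, and the least-squares toy problem are assumptions made here for demonstration.

import numpy as np

def sag(grad_i, x0, n, step_size, num_iters, seed=0):
    """Minimal SAG sketch (illustrative; not the authors' reference code).

    grad_i(x, i) must return the gradient of the i-th smooth term at x.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    memory = np.zeros((n,) + x.shape)   # stored gradient of each term
    grad_sum = np.zeros_like(x)         # running sum of the stored gradients
    for _ in range(num_iters):
        i = rng.integers(n)             # uniform sampling; the paper also studies non-uniform sampling
        g = grad_i(x, i)                # evaluate only the i-th gradient this iteration
        grad_sum += g - memory[i]       # swap the stale stored gradient for the fresh one
        memory[i] = g
        x -= step_size * grad_sum / n   # step along the average of all stored gradients
    return x

# Toy usage on hypothetical least-squares terms f_i(x) = 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
x_hat = sag(lambda x, i: A[i] * (A[i] @ x - b[i]), np.zeros(5),
            n=100, step_size=0.01, num_iters=5000)

The point mirrored from the abstract is that each iteration evaluates only one term's gradient (as in SG), yet the step uses the average of all n stored gradients, which is what enables the faster convergence rates.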

https://hal.inria.fr/hal-00860051
Contributor: Mark Schmidt
Submitted on: Tuesday, May 10, 2016 - 10:28:36 PM
Last modification on: Thursday, February 7, 2019 - 2:42:35 PM

Files

sagMP.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-00860051, version 2
  • ARXIV : 1309.2388

Citation

Mark Schmidt, Nicolas Le Roux, Francis Bach. Minimizing Finite Sums with the Stochastic Average Gradient. Mathematical Programming B, Springer, 2017, 162 (1-2), pp. 83-112. ⟨hal-00860051v2⟩

Metrics

  • Record views: 2484
  • File downloads: 9698