Laboratoire Jean Kuntzmann
Preprints, Working Papers · Year: 2017

Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure

Abstract

Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. However, in the context of empirical risk minimization, it is often helpful to augment the training set by considering random perturbations of input examples. In this case, the objective is no longer a finite sum, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper, we introduce a variance reduction approach for this setting when the objective is strongly convex. After an initial linearly convergent phase, the algorithm achieves an $O(1/t)$ convergence rate in expectation like SGD, but with a constant factor that is typically much smaller, depending on the variance of the gradient estimates due to the perturbations on a single example. Extensions of the algorithm to composite objectives and non-uniform sampling are also studied.
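In this setting, the objective takes the form $F(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\rho} [ f_i(x, \rho) ]$: a finite sum over $n$ training examples, each of which is only observed through random perturbations $\rho$ (as in data augmentation). As a rough illustration of how per-example variance reduction can be combined with this extra randomness, the Python sketch below runs a MISO-style update on a toy ridge-regression problem with Gaussian input perturbations. Everything in it (the loss, the perturbation model, the step-size schedule, all names) is an assumption made for illustration; it is not the paper's exact algorithm, which is specified in the PDF.

# Hypothetical sketch: MISO-style variance reduction for a finite sum of
# expectations F(x) = (1/n) * sum_i E_rho[ f_i(x, rho) ]. The ridge loss,
# perturbation model, and step-size schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 100, 10, 0.1            # examples, dimension, strong convexity
A = rng.normal(size=(n, d))        # base input examples
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
sigma = 0.05                       # scale of the random input perturbations

def grad_fi(x, i, rho):
    """Stochastic gradient of the ridge loss on a perturbed copy of example i."""
    a = A[i] + rho                 # the perturbed input (data augmentation)
    return (a @ x - b[i]) * a + mu * x

# One auxiliary vector z[i] per example; the iterate is their running average.
z = np.zeros((n, d))
x = z.mean(axis=0)

for t in range(1, 20001):
    alpha = min(1.0, 2.0 * n / (t + 2.0 * n))    # decreasing step size (illustrative)
    i = rng.integers(n)                          # sample an example index...
    rho = sigma * rng.normal(size=d)             # ...and a fresh perturbation
    z_new = (1 - alpha) * z[i] + alpha * (x - grad_fi(x, i, rho) / mu)
    x = x + (z_new - z[i]) / n                   # update the average in O(d)
    z[i] = z_new

The role of the per-example variables z[i] is to cancel the variance coming from sampling the index i, so that near the optimum only the perturbation noise on a single example remains; this is the mechanism behind the smaller constant in the $O(1/t)$ rate claimed above.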
Main file
stoch-miso.pdf (629.47 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01375816, version 1 (3 October 2016)
hal-01375816, version 2 (10 January 2017)
hal-01375816, version 3 (23 January 2017)
hal-01375816, version 4 (27 February 2017)
hal-01375816, version 5 (1 June 2017)
hal-01375816, version 6 (15 November 2017)

Identifiers

HAL Id: hal-01375816

Cite

Alberto Bietti, Julien Mairal. Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure. 2017. ⟨hal-01375816v2⟩