Stochastic Variance Reduction Methods for Saddle-Point Problems - HAL open archive
Preprint / Working paper, Year: 2016

Stochastic Variance Reduction Methods for Saddle-Point Problems

Abstract

We consider convex-concave saddle-point problems where the objective functions may be split into many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply, and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities; (b) there are two notions of splits, in terms of functions or in terms of partial derivatives; (c) the split does not need to be done with convex-concave terms; (d) non-uniform sampling is key to an efficient algorithm, both in theory and in practice; and (e) these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework, leading to an algorithm that is always superior to accelerated batch algorithms.
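To make the variance-reduction idea concrete, here is a minimal sketch of an SVRG-style update applied to the monotone-operator formulation of a convex-concave saddle point. This is not the authors' exact algorithm (the paper uses proximal steps and non-uniform sampling); it is a simplified forward-step illustration on a hypothetical example, the saddle-point form of ridge regression, min_x max_y (1/n) Σ_i [y_i(a_i^T x − b_i) − y_i²/2] + (λ/2)‖x‖², whose x-solution is the ridge estimator. All variable names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy problem: saddle-point form of ridge regression.
rng = np.random.default_rng(0)
n, d, lam = 8, 3, 0.5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def F_full(x, y):
    """Full monotone operator F(z) = (grad_x L, -grad_y L)."""
    gx = A.T @ y / n + lam * x
    gy = (y + b - A @ x) / n
    return gx, gy

def F_i(x, y, i):
    """Per-component operator: E_i[F_i(z)] equals F(z)."""
    gx = y[i] * A[i] + lam * x          # unbiased estimate of grad_x
    gy = np.zeros(n)
    gy[i] = y[i] + b[i] - A[i] @ x      # unbiased estimate of -grad_y
    return gx, gy

def svrg_saddle(eta=0.005, epochs=4000):
    """SVRG on the operator: step along F_i(z) - F_i(z_snap) + F(z_snap)."""
    x, y = np.zeros(d), np.zeros(n)
    for _ in range(epochs):
        x_s, y_s = x.copy(), y.copy()          # snapshot point
        gx_full, gy_full = F_full(x_s, y_s)    # full operator at snapshot
        for i in rng.integers(n, size=n):      # uniform sampling, one pass
            gx, gy = F_i(x, y, i)
            sx, sy = F_i(x_s, y_s, i)
            x = x - eta * (gx - sx + gx_full)
            y = y - eta * (gy - sy + gy_full)
    return x, y

x_hat, y_hat = svrg_saddle()
# Closed-form ridge solution for comparison.
x_star = np.linalg.solve(A.T @ A / n + lam * np.eye(d), A.T @ b / n)
```

Because the operator is strongly monotone (strong convexity in x, strong concavity in y) and the SVRG estimate is unbiased with vanishing variance at the snapshot, the iterates converge linearly to the saddle point; point (d) of the abstract would replace the uniform draw of `i` by sampling proportional to per-component smoothness (e.g. ‖a_i‖).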
Main file: sagasaddle_hal.pdf (848.32 KB)
Origin: files produced by the author(s)

Dates and versions

hal-01319293, version 1 (20-05-2016)
hal-01319293, version 2 (02-11-2016)

Cite

P Balamurugan, Francis Bach. Stochastic Variance Reduction Methods for Saddle-Point Problems. 2016. ⟨hal-01319293v1⟩
4229 views
1801 downloads

