Optimal algorithms for smooth and strongly convex distributed optimization in networks

Kevin Scaman 1 Francis Bach 2 Sébastien Bubeck 3 Yin Tat Lee 3 Laurent Massoulié 1
2 SIERRA - Statistical Machine Learning and Parsimony (DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris)
Abstract: In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
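To make the centralized rate concrete, the sketch below gives a minimal single-machine implementation of Nesterov's accelerated gradient descent for an $L$-smooth, $\mu$-strongly convex objective, the method the paper shows is optimal to distribute in the master/slave setting. The function name nesterov_agd and the quadratic test problem are illustrative assumptions, not code from the paper.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, n_iters):
    """Nesterov's accelerated gradient descent for an L-smooth,
    mu-strongly convex objective (constant-momentum variant).

    Reaches precision eps in O(sqrt(kappa) * log(1/eps)) iterations,
    where kappa = L / mu is the condition number. In the centralized
    setting of the paper, grad(y) would be the average of local
    gradients that the master gathers from the nodes, at a cost of
    at most Delta * tau time units of communication per iteration.
    """
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # momentum weight
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iters):
        x_next = y - grad(y) / L            # gradient step at extrapolated point
        y = x_next + beta * (x_next - x)    # momentum extrapolation
        x = x_next
    return x

# Illustrative usage on a strongly convex quadratic f(x) = 0.5 * x' A x.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
A = M.T @ M + 0.1 * np.eye(10)              # eigenvalues bounded in [mu, L]
eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]
x_star = nesterov_agd(lambda x: A @ x, np.ones(10), L, mu, n_iters=500)
print(np.linalg.norm(A @ x_star))           # gradient norm, close to 0
```

In the decentralized setting, the paper's MSDA method replaces the master's exact gradient averaging with several gossip rounds per gradient step, i.e. repeated multiplications by the gossip matrix $W$, which is how the $1/\sqrt{\gamma}$ communication factor in the rate arises.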
Document type:
Preprint, working paper
2017

https://hal.archives-ouvertes.fr/hal-01478317
Contributor: Kevin Scaman
Submitted on: Tuesday, February 28, 2017 - 09:59:55
Last modified on: Thursday, April 26, 2018 - 10:29:04
Document(s) archived on: Monday, May 29, 2017 - 13:06:46

Files

distributed_dual_axiv.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01478317, version 1
  • ARXIV : 1702.08704

Citation

Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié. Optimal algorithms for smooth and strongly convex distributed optimization in networks. 2017. 〈hal-01478317〉

Metrics

  • Record views: 700
  • File downloads: 901