Optimal algorithms for smooth and strongly convex distributed optimization in networks

Kevin Scaman 1 Francis Bach 2 Sébastien Bubeck 3 Yin Tat Lee 3 Laurent Massoulié 1
2 SIERRA (Statistical Machine Learning and Parsimony), DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
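To make the quantity $\gamma$ in the decentralized rate concrete, here is a minimal sketch (not from the paper's code) that computes the normalized eigengap of a gossip matrix, taking the graph Laplacian of a ring network as an example. The function names and the choice of a ring topology are illustrative assumptions; the eigengap is taken here as the ratio of the smallest nonzero eigenvalue to the largest eigenvalue, one common normalization for gossip matrices.

```python
import numpy as np

def laplacian_ring(n):
    # Graph Laplacian of a ring on n nodes: a valid gossip matrix
    # (symmetric, positive semidefinite, null space spanned by the all-ones vector).
    W = 2 * np.eye(n)
    for i in range(n):
        W[i, (i + 1) % n] -= 1
        W[i, (i - 1) % n] -= 1
    return W

def normalized_eigengap(W):
    # gamma = (smallest nonzero eigenvalue) / (largest eigenvalue).
    eig = np.sort(np.linalg.eigvalsh(W))
    lam_max = eig[-1]
    lam_min_pos = eig[eig > 1e-9 * lam_max][0]
    return lam_min_pos / lam_max

gamma = normalized_eigengap(laplacian_ring(10))
```

For a ring of $n$ nodes, the Laplacian eigenvalues are $2 - 2\cos(2\pi k/n)$, so $\gamma$ shrinks as $n$ grows, which is why the $\tau/\sqrt{\gamma}$ communication term in the MSDA rate matters on poorly connected topologies.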
Document type: Preprints, Working Papers, ...

https://hal.archives-ouvertes.fr/hal-01478317
Contributor: Kevin Scaman
Submitted on: Tuesday, February 28, 2017 - 9:59:55 AM
Last modification on: Wednesday, January 30, 2019 - 11:07:40 AM
Long-term archiving on: Monday, May 29, 2017 - 1:06:46 PM

Files

distributed_dual_axiv.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01478317, version 1
  • arXiv: 1702.08704

Citation

Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié. Optimal algorithms for smooth and strongly convex distributed optimization in networks. 2017. ⟨hal-01478317⟩


Metrics

Record views: 734
File downloads: 1055