# Optimal algorithms for smooth and strongly convex distributed optimization in networks

• SIERRA - Statistical Machine Learning and Parsimony, DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
• DYOGENE - Dynamics of Geometric Networks, Inria de Paris, CNRS - Centre National de la Recherche Scientifique : UMR 8548, DI-ENS - Département d'informatique de l'École normale supérieure
### Abstract

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
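The centralized result in the abstract says that plain Nesterov acceleration, applied to the average of the local functions with gradients gathered at a master node, is already optimal. The sketch below illustrates that scheme in its simplest form; it is our own minimal simulation (the function name, the quadratic test problem, and the single-master setup are assumptions for illustration, not the paper's implementation), using the standard constant-momentum Nesterov update for an $L$-smooth, $\mu$-strongly convex objective.

```python
import numpy as np

def distributed_nesterov(local_grads, x0, L, mu, n_iter=200):
    """Illustrative master/slave accelerated gradient method.

    Each worker holds one local gradient oracle; in every round the
    master averages the workers' gradients (one communication round,
    cost Delta*tau in the paper's model) and performs the classical
    Nesterov update for a mu-strongly convex, L-smooth objective.
    """
    kappa = L / mu  # global condition number (kappa_g in the paper)
    momentum = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    x = np.array(x0, dtype=float)
    y = x.copy()
    for _ in range(n_iter):
        # Master gathers and averages the local gradients at y.
        g = np.mean([grad(y) for grad in local_grads], axis=0)
        x_next = y - g / L                       # gradient step, size 1/L
        y = x_next + momentum * (x_next - x)     # Nesterov extrapolation
        x = x_next
    return x

# Hypothetical two-worker quadratic: the averaged objective is
# 0.5 * x^T diag(0.5, 5) x - (0.5, 5) . x, minimized at (1, 1).
grads = [
    lambda x: np.array([x[0] - 1.0, 0.0]),
    lambda x: np.array([0.0, 10.0 * (x[1] - 1.0)]),
]
x_star = distributed_nesterov(grads, [0.0, 0.0], L=5.0, mu=0.5)
```

The $O(\sqrt{\kappa_g})$ iteration count comes from the momentum coefficient above: each iteration contracts the error by roughly $1 - 1/\sqrt{\kappa_g}$, and each iteration costs one round of master/worker communication plus one local gradient evaluation.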
Document type :
Preprints, Working Papers, ...

https://hal.archives-ouvertes.fr/hal-01478317
Contributor : Kevin Scaman
Submitted on : Tuesday, February 28, 2017 - 9:59:55 AM
Last modification on : Monday, January 20, 2020 - 8:16:33 AM
Document(s) archived on : Monday, May 29, 2017 - 1:06:46 PM

### Files

distributed_dual_axiv.pdf
Files produced by the author(s)

### Identifiers

• HAL Id : hal-01478317, version 1
• ARXIV : 1702.08704

### Citation

Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié. Optimal algorithms for smooth and strongly convex distributed optimization in networks. 2017. ⟨hal-01478317⟩
