
Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD

Abstract: Large-scale machine learning increasingly relies on distributed optimization, whereby several machines contribute to the training process of a statistical model. In this work we study the performance of sparsification, a technique used to reduce communication overheads, in asynchronous, distributed settings. In particular, for the first time in an asynchronous, non-convex setting, we theoretically prove that, in the presence of staleness, sparsification does not harm SGD performance: the ergodic convergence rate matches the known result for standard SGD, that is $\mathcal{O} \left( 1/\sqrt{T} \right)$. We also carry out an empirical study to complement our theory, and confirm that the effects of sparsification on the convergence rate are negligible when compared to 'vanilla' SGD, even in the challenging scenario of an asynchronous, distributed system.
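To make the communication-reduction idea concrete, below is a minimal sketch of gradient sparsification, assuming a top-$k$ operator (a common choice in this literature; the paper's exact operator and staleness model may differ). Each worker keeps only the $k$ largest-magnitude gradient entries and sends those, zeroing the rest, so the communicated update is sparse while the descent direction is roughly preserved.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of the gradient; zero the rest.

    This is an illustrative top-k operator, a common sparsification
    choice; it is an assumption, not necessarily the paper's operator.
    """
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    # Indices of the k entries with largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

# Toy asynchronous-style update: apply a (possibly stale) sparsified
# gradient to the shared parameters with learning rate lr.
rng = np.random.default_rng(0)
params = rng.normal(size=8)
stale_grad = rng.normal(size=8)   # gradient computed on an older params copy
lr = 0.1
params -= lr * topk_sparsify(stale_grad, k=3)
```

In an actual asynchronous system, only the `k` index/value pairs would be transmitted to the parameter server, which is where the communication savings come from.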
Document type :
Preprints, Working Papers, ...
Complete list of metadata
Contributor: Centre de Documentation Eurecom
Submitted on: Wednesday, September 15, 2021 - 1:48:48 PM
Last modification on: Thursday, September 16, 2021 - 3:40:44 AM

  • HAL Id: hal-03345253, version 1
  • arXiv: 1910.09466



Rosa Candela, Giulio Franzese, Maurizio Filippone, Pietro Michiardi. Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD. 2019. ⟨hal-03345253⟩


