Asynchronous decentralized convex optimization through short-term gradient averaging

Jérôme Fellus ¹, David Picard ², Philippe-Henri Gosselin ²
¹ MIDI
² ETIS - Équipes Traitement de l'Information et Systèmes
Abstract: This paper considers decentralized convex optimization over a network in large-scale contexts, where "large" simultaneously applies to the number of training examples, the dimensionality, and the number of networked nodes. We first propose a centralized optimization scheme that generalizes successful existing methods based on gradient averaging, improving their flexibility by making the number of averaged gradients an explicit parameter of the method. We then propose an asynchronous distributed algorithm that implements this scheme on large decentralized computing networks.
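To make the abstract's central idea concrete, the sketch below shows gradient descent where each update steps along the average of the k most recent sampled gradients, so that the window size k plays the role of the "number of averaged gradients" parameter: k = 1 recovers plain stochastic gradient descent, while a large k approaches full gradient averaging. This is a minimal single-machine illustration under our own assumptions, not the paper's asynchronous distributed algorithm; all names (short_term_averaged_gd, grad_fns, k, step) are ours, not the authors'.

```python
import numpy as np

def short_term_averaged_gd(grad_fns, w0, k=8, step=0.01, n_iters=2000, seed=0):
    """Gradient descent driven by the average of the k most recent
    sampled gradients (illustrative sketch of short-term gradient
    averaging; k=1 is plain SGD, large k tends toward full averaging)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    window = []  # sliding window holding the k most recent gradients
    for _ in range(n_iters):
        i = rng.integers(len(grad_fns))        # sample one training example
        window.append(grad_fns[i](w))          # its gradient at the current iterate
        if len(window) > k:
            window.pop(0)                      # keep only the k newest gradients
        w = w - step * np.mean(window, axis=0) # step along the short-term average
    return w

# Toy usage: per-example squared losses f_i(w) = (a_i @ w - y_i)^2, minimized at w = 1.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))
y = A @ np.ones(5)
grads = [lambda w, a=a, t=t: 2.0 * a * (a @ w - t) for a, t in zip(A, y)]
w_hat = short_term_averaged_gd(grads, np.zeros(5), k=8)
print(np.round(w_hat, 2))  # approximately [1. 1. 1. 1. 1.]
```

In the decentralized setting the paper targets, each node would presumably maintain a similar short-term average over gradients exchanged asynchronously with other nodes; the full protocol is described in the PDF.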
Document type: Conference paper

Cited literature: 13 references

https://hal.archives-ouvertes.fr/hal-01148648
Contributor: Jérôme Fellus
Submitted on: Tuesday, May 5, 2015 - 9:23:52 AM
Last modification on: Thursday, May 3, 2018 - 3:18:06 PM
Long-term archiving on: Monday, September 14, 2015 - 6:50:52 PM

File: jfellus_ESANN15.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01148648, version 1

Citation

Jérôme Fellus, David Picard, Philippe-Henri Gosselin. Asynchronous decentralized convex optimization through short-term gradient averaging. European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Apr 2015, Bruges, Belgium. ⟨hal-01148648⟩

Metrics: 563 record views, 123 file downloads