Conference Paper, 2015

Asynchronous decentralized convex optimization through short-term gradient averaging

Abstract

This paper considers decentralized convex optimization over a network in large-scale settings, where "large" applies simultaneously to the number of training examples, the dimensionality, and the number of networked nodes. We first propose a centralized optimization scheme that generalizes successful existing methods based on gradient averaging, improving their flexibility by making the number of averaged gradients an explicit parameter of the method. We then propose an asynchronous distributed algorithm that implements this scheme on large decentralized computing networks.
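The abstract does not spell out the update rule, but the idea of making "the number of averaged gradients an explicit parameter" can be illustrated with a hypothetical sliding-window variant of stochastic gradient descent: the descent direction is the average of the k most recent stochastic gradients, so k=1 recovers plain SGD while larger k approaches full gradient averaging. The function names and the parameter k below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from collections import deque

def short_term_averaged_gd(grad_fn, x0, k=5, lr=0.1, n_steps=2000, seed=0):
    """Gradient descent whose step direction is the mean of the k most
    recent stochastic gradients (a hypothetical sketch: k is the explicit
    'number of averaged gradients' parameter)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    window = deque(maxlen=k)              # short-term gradient memory
    for _ in range(n_steps):
        window.append(grad_fn(x, rng))    # newest gradient evicts the oldest
        x = x - lr * np.mean(window, axis=0)
    return x

# Toy least-squares problem f(x) = ||Ax - b||^2 with per-sample gradients;
# the exact minimizer here is x* = (1, 1).
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 1.0, 2.0])

def grad_fn(x, rng):
    i = rng.integers(len(b))              # sample one training example
    return 2.0 * (A[i] @ x - b[i]) * A[i]

x_star = short_term_averaged_gd(grad_fn, x0=[0.0, 0.0], k=5)
```

With k=5 the iterates smooth out the per-sample noise and settle close to the minimizer (1, 1); setting k=1 reproduces ordinary SGD on the same problem.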
Main file: jfellus_ESANN15.pdf (104.3 KB). Origin: files produced by the author(s).

Dates and versions

hal-01148648 , version 1 (05-05-2015)


Cite

Jerome Fellus, David Picard, Philippe-Henri Gosselin. Asynchronous decentralized convex optimization through short-term gradient averaging. European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Apr 2015, Bruges, Belgium. ⟨hal-01148648⟩