
Accelerated Methods for Distributed Optimization

Hadrien Hendrikx 1, 2, 3 
1 DYOGENE - Dynamics of Geometric Networks
DI-ENS - Département d'informatique - ENS Paris, CNRS - Centre National de la Recherche Scientifique : UMR 8548, Inria de Paris
2 SIERRA - Statistical Machine Learning and Parsimony
DI-ENS - Département d'informatique - ENS Paris, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract : In order to make meaningful predictions, modern machine learning models require huge amounts of data, and are generally trained in a distributed way, i.e., using many computing units. Indeed, the data is often too large or too sensitive to be gathered and stored in one place, and stacking computing units increases the available computing power. Yet, machine learning models are usually trained using stochastic optimization methods, which perform a sequence of steps that are noisy but relatively cheap to compute. Besides, many algorithms reuse past information to speed up convergence, which requires a high level of synchrony between agents. This thesis presents a set of results that extend recent advances in stochastic and accelerated convex optimization to the decentralized setting, in which there is no central coordination but only pairwise communications.
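The decentralized setting the abstract describes — no central coordinator, only pairwise communications — can be illustrated by randomized gossip averaging, a standard building block of decentralized optimization. The sketch below is not taken from the thesis; it is a minimal illustration (function name and parameters are hypothetical) of how agents reach consensus on the average of their local values through random pairwise exchanges alone.

```python
import random

def gossip_average(values, num_rounds=10_000, seed=0):
    """Randomized pairwise gossip: at each step, two randomly chosen
    agents replace their local values with their mean. With no central
    coordinator, all agents converge to the global average, since each
    pairwise exchange preserves the sum while shrinking disagreement."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(num_rounds):
        i, j = rng.sample(range(n), 2)  # a random pair communicates
        mean = (x[i] + x[j]) / 2
        x[i] = x[j] = mean              # both agents adopt their average
    return x

# Example: five agents holding different local values (average is 4.0)
final = gossip_average([1.0, 2.0, 3.0, 4.0, 10.0])
```

In decentralized optimization algorithms, a step of this kind is typically interleaved with local gradient updates, so that agents agree on a model without ever gathering the data in one place.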
Contributor : Hadrien Hendrikx
Submitted on : Friday, December 10, 2021 - 7:23:22 PM
Last modification on : Wednesday, June 8, 2022 - 12:50:06 PM
Long-term archiving on : Friday, March 11, 2022 - 7:45:22 PM




  • HAL Id : tel-03475383, version 1


Hadrien Hendrikx. Accelerated Methods for Distributed Optimization. Optimization and Control [math.OC]. PSL, 2021. English. ⟨tel-03475383⟩


