Journal article, IEEE Transactions on Automatic Control, 2021

Distributed Derivative-free Learning Method for Stochastic Optimization over a Network with Sparse Activity

Abstract

This article addresses a distributed optimization problem in a communication network where nodes are active only sporadically. Each active node applies a learning method to control its action so as to maximize the global utility function, defined as the sum of the local utility functions of the active nodes. We deal with a stochastic optimization problem in which the utility functions are disturbed by a nonadditive stochastic process. We consider the more challenging situation where the learning method must rely solely on a scalar approximation of the utility function, rather than on its closed-form expression, so that the typical gradient descent method cannot be applied. This setting is realistic when the network is affected by a stochastic, time-varying process and each node cannot have full knowledge of the network state. We propose a distributed optimization algorithm and prove its almost sure convergence to the optimum. The convergence rate is derived under the additional assumption that the objective function is strongly concave. Numerical results are presented to support our claims.
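
The abstract's key difficulty is that each node observes only a noisy scalar value of the global utility, so a gradient must be estimated from function evaluations alone. The sketch below illustrates the generic idea with a one-point zeroth-order (simultaneous-perturbation) gradient estimate combined with random sporadic activity; it is not the paper's actual algorithm. The utility model `global_utility`, the activity probability `p_active`, and the step-size schedules are assumptions chosen for illustration only.

```python
# Minimal sketch (not the paper's algorithm) of one-point zeroth-order
# learning with sparse node activity. All names and parameter choices
# below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def global_utility(actions, noise_scale=0.1):
    """Noisy scalar feedback only: a hypothetical sum of concave local
    utilities, disturbed by a non-additive (here multiplicative) noise."""
    base = -np.sum((actions - 1.0) ** 2)  # maximum at the all-ones action
    return base * (1.0 + noise_scale * rng.standard_normal())

n_nodes, n_rounds, p_active = 10, 50_000, 0.2
actions = np.zeros(n_nodes)

for t in range(1, n_rounds + 1):
    active = rng.random(n_nodes) < p_active        # sporadic node activity
    delta = t ** -0.25                             # vanishing exploration radius
    eta = 0.1 * t ** -0.75                         # vanishing step size
    # Each active node perturbs its own action by a random +/- delta.
    probe = np.where(active, rng.choice([-1.0, 1.0], size=n_nodes), 0.0)
    y = global_utility(actions + delta * probe)    # one scalar observation per round
    actions += eta * (y / delta) * probe           # one-point gradient ascent step
    actions = np.clip(actions, -2.0, 2.0)          # keep iterates in a bounded set

print(np.round(actions, 2))  # iterates should drift toward the all-ones maximizer
```

The point mirrored from the abstract is that the update uses only the scalar observation `y`, never a closed-form gradient: the decaying exploration radius and step size are the standard mechanism that makes such one-point estimates average out to an ascent direction.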

Dates and versions

hal-03518461 , version 1 (09-01-2022)

Identifiers

HAL Id: hal-03518461
DOI: 10.1109/TAC.2021.3077516

Cite

Wenjie Li, Mohamad Assaad, Shiqi Zheng. Distributed Derivative-free Learning Method for Stochastic Optimization over a Network with Sparse Activity. IEEE Transactions on Automatic Control, 2021, 67 (5), pp.2221-2236. ⟨10.1109/TAC.2021.3077516⟩. ⟨hal-03518461⟩