Stochastic Proximal Gradient Consensus over Random Networks

Mingyi Hong, Tsung Hui Chang

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


We consider solving a convex optimization problem with a possibly stochastic gradient, over a randomly time-varying multiagent network. Each agent has access to a local objective function, and it only has unbiased estimates of the gradients of the smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm with the following key features: 1) it works for both static and certain randomly time-varying networks; 2) it allows the agents to utilize either exact or stochastic gradient information; 3) it is convergent with a provable rate. In particular, the proposed algorithm converges to a global optimal solution at a rate of \mathcal O(1/r) (resp. \mathcal O(1/\sqrt{r})) when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of (seemingly unrelated) distributed algorithms, such as EXTRA, PG-EXTRA, IC/IDC-ADMM, DLM, and the classical distributed subgradient method.
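To illustrate the general idea, the following is a minimal sketch of a decentralized stochastic proximal-gradient consensus iteration. It is a generic illustration of the problem setting described in the abstract (agents mixing iterates through a doubly stochastic matrix, taking noisy gradient steps on their smooth local costs, and applying a proximal step for a shared nonsmooth term), not the paper's exact DSPGC updates; the quadratic local costs, the mixing matrix, and the soft-thresholding regularizer are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.| (soft thresholding), the prox step
    for an l1 regularizer shared by all agents (illustrative choice)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dspg_consensus(b, W, lam=0.1, alpha=0.1, iters=2000, noise=0.0, seed=0):
    """Generic decentralized stochastic proximal-gradient iteration.

    Each agent i holds the smooth local cost f_i(x) = 0.5*(x - b[i])**2
    (a hypothetical example, not from the paper). At each round, agents
    mix their scalar iterates via the doubly stochastic matrix W, take a
    step along an unbiased stochastic gradient of f_i, then apply the
    proximal operator of the nonsmooth term lam*|x|.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(iters):
        # unbiased stochastic gradient of the smooth part: (x - b) + noise
        g = (x - b) + noise * rng.standard_normal(len(b))
        # consensus mixing, gradient step, then proximal step
        x = soft_threshold(W @ x - alpha * g, alpha * lam)
    return x

if __name__ == "__main__":
    # 4 agents on a ring; a doubly stochastic mixing matrix
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    b = np.array([1.0, 2.0, 3.0, 4.0])
    x = dspg_consensus(b, W, lam=0.1, alpha=0.1, iters=5000)
    print(x)  # agents approximately agree near mean(b) - lam
```

With a fixed step size and exact gradients, this plain consensus scheme only reaches a neighborhood of the optimum (the well-known bias of decentralized gradient descent); one contribution of the paper is an ADMM-based dynamic scheme that removes this limitation and attains the stated O(1/r) rate.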

Original language: English (US)
Article number: 7862886
Pages (from-to): 2933-2948
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Issue number: 11
State: Published - Jun 1 2017

Bibliographical note

Funding Information:
The work of M. Hong was supported in part by the National Science Foundation under Grant CCF-1526078 and in part by Air Force Office of Scientific Research under Grant 15RT0767. The work of T.-H. Chang was supported by the National Natural Science Foundation of China, under Grant 61571385.

Publisher Copyright:
© 2017 IEEE.


Keywords

  • ADMM
  • Distributed optimization
  • Fast algorithms
  • Rate analysis

