Abstract
This paper proposes and analyzes an asynchronous, communication-efficient distributed optimization framework for a general class of machine learning and signal processing problems. At each iteration, worker machines compute gradients of a known empirical loss function using their own local data, and a master machine solves a related minimization problem to update the current estimate. We establish that the proposed algorithm converges at a sublinear rate in the number of communication rounds, matching the best theoretical rate achievable for nonconvex nonsmooth problems. Moreover, under a strong convexity assumption on the smooth part of the loss function, linear convergence is established. Extensive numerical experiments show that the proposed approach indeed improves, sometimes significantly, over other state-of-the-art algorithms in terms of total communication efficiency.
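To make the master/worker pattern described above concrete, the following is a minimal Python sketch of one possible instantiation: workers compute gradients of a local smooth loss at possibly stale copies of the model, and the master updates the global estimate by solving a small proximal subproblem. The quadratic loss, l1 regularizer, staleness model, and all function names are illustrative assumptions for exposition only, not the paper's actual algorithm or its parameters.

```python
import numpy as np

# Illustrative sketch of an asynchronous master/worker update loop.
# The problem setup (least squares + l1), the proximal subproblem, and the
# staleness model are assumptions made for this example.

rng = np.random.default_rng(0)
n_workers, dim, samples_per_worker = 4, 20, 50

# Each worker holds a private shard of a synthetic least-squares problem.
A = [rng.standard_normal((samples_per_worker, dim)) for _ in range(n_workers)]
x_true = rng.standard_normal(dim) * (rng.random(dim) < 0.3)
b = [Ai @ x_true + 0.01 * rng.standard_normal(samples_per_worker) for Ai in A]

def local_gradient(i, x):
    """Worker i: gradient of its local smooth loss at a (possibly stale) iterate x."""
    return A[i].T @ (A[i] @ x - b[i]) / samples_per_worker

def master_update(x, g, rho=5.0, lam=0.05):
    """Master: solve the proximal subproblem
       argmin_z  g^T z + (rho/2)||z - x||^2 + lam * ||z||_1,
    which has the closed-form soft-thresholding solution below."""
    v = x - g / rho
    return np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)

x = np.zeros(dim)
stale = [x.copy() for _ in range(n_workers)]  # model copy last sent to each worker

for k in range(400):
    i = rng.integers(n_workers)       # whichever worker reports next (asynchrony)
    g = local_gradient(i, stale[i])   # gradient evaluated at that worker's stale copy
    x = master_update(x, g)           # master refines the global estimate
    stale[i] = x.copy()               # only the reporting worker receives the new model

print("distance to ground truth:", np.linalg.norm(x - x_true))
```

Each communication round moves a single gradient to the master and a single model back to one worker, which is the kind of per-round traffic the communication-efficiency claim in the abstract refers to.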
Original language | English (US) |
---|---|
Title of host publication | 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 638-642 |
Number of pages | 5 |
ISBN (Electronic) | 9781728112954 |
DOIs | |
State | Published - Jul 2 2018 |
Event | 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Anaheim, United States. Duration: Nov 26 2018 → Nov 29 2018 |
Publication series
Name | 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings |
---|---|
Conference
Conference | 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 |
---|---|
Country/Territory | United States |
City | Anaheim |
Period | 11/26/18 → 11/29/18 |
Bibliographical note
Publisher Copyright: © 2018 IEEE.
Keywords
- Asynchronous
- Communication-efficient
- Convergence
- Distributed algorithm
- Nonconvex