Abstract
We propose a communication- and computation-efficient algorithm for high-dimensional distributed sparse learning, motivated by the approach of Wang et al. (2016). At each iteration, local machines compute gradients on their own local data, and using these, a master machine solves a shifted ℓ1-regularized minimization problem. Our contribution is a Two-Way Truncation procedure that reduces the communication cost per transmission from the order of the parameter dimension to the order of the number of nonzero entries in the parameter. Theoretically, we prove that the estimation error of the proposed algorithm decreases exponentially and, under mild conditions, matches that of the centralized method. Extensive experiments on both simulated and real data show that the proposed algorithm is efficient and has statistical performance comparable to the centralized method.
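For intuition, here is a minimal sketch in Python of the communication round described above, assuming a squared loss, gradient averaging at the master, and an inner proximal-gradient solver. The helper names (`hard_threshold`, `two_way_truncated_round`) and all parameter choices are hypothetical illustrations under these assumptions, not the paper's exact procedure.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zeroing out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def two_way_truncated_round(theta, workers, X0, y0, s, lam, inner_iters=200):
    """One communication round of a two-way truncated scheme (sketch).

    workers: list of (X_k, y_k) local datasets; (X0, y0) is the master's data.
    The iterate sent down and each local gradient sent up are hard-thresholded
    to s entries, so every message carries O(s) nonzeros instead of O(d).
    Squared loss is assumed purely for concreteness.
    """
    theta_t = hard_threshold(theta, s)  # downlink: broadcast an s-sparse iterate

    # Uplink: each worker returns an s-sparse truncation of its local gradient.
    grads = [hard_threshold(X.T @ (X @ theta_t - y) / len(y), s)
             for X, y in workers]
    avg_grad = np.mean(grads, axis=0)

    # Shift: global (averaged) gradient minus the master's own local gradient,
    # in the spirit of the surrogate objective of Wang et al. (2016).
    g0 = X0.T @ (X0 @ theta_t - y0) / len(y0)
    shift = avg_grad - g0

    # Master solves min_w (1/2n)||X0 w - y0||^2 + <shift, w> + lam*||w||_1
    # by proximal gradient descent with a Lipschitz step size.
    step = len(y0) / np.linalg.norm(X0, 2) ** 2
    w = theta_t.copy()
    for _ in range(inner_iters):
        grad_w = X0.T @ (X0 @ w - y0) / len(y0) + shift
        w = soft_threshold(w - step * grad_w, step * lam)
    return w
```

A driver would call `two_way_truncated_round` repeatedly, feeding the returned iterate back in. With `s` on the order of the true sparsity, each message carries O(s) nonzeros rather than O(d) entries, which is the communication saving described in the abstract.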
| Original language | English (US) |
| --- | --- |
| Title of host publication | 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2017 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1-5 |
| Number of pages | 5 |
| ISBN (Electronic) | 9781538612514 |
| DOIs | |
| State | Published - Mar 9 2018 |
| Event | 7th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2017 - Curacao |
| Duration | Dec 10 2017 → Dec 13 2017 |
Publication series

| Name | 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2017 |
| --- | --- |
| Volume | 2017-December |
Conference

| Conference | 7th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2017 |
| --- | --- |
| City | Curacao |
| Period | 12/10/17 → 12/13/17 |
Bibliographical note
Funding Information: The authors gratefully acknowledge support from DARPA Young Faculty Award N66001-14-1-4047 and thank Jialei Wang for very useful suggestions.
Publisher Copyright:
© 2017 IEEE.