LAG: Lazily aggregated gradient for communication-efficient distributed learning

Tianyi Chen, Georgios B. Giannakis, Tao Sun, Wotao Yin

Research output: Contribution to journal › Conference article › peer-review

182 Scopus citations

Abstract

This paper presents a new class of gradient methods for distributed machine learning that adaptively skip the gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly-varying gradients and, therefore, trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient - justifying our acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as batch gradient descent in strongly-convex, convex, and nonconvex cases; and, ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared to alternatives.
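The skipping rule summarized above lends itself to a short illustration. Below is a minimal Python sketch of a LAG-WK-style update on synthetic quadratic losses: each worker uploads a fresh gradient only when it has drifted sufficiently from its last uploaded copy, and the server otherwise reuses the stale gradient in its running aggregate. The losses, the step size `alpha`, the trigger constant `xi`, and the memory depth `D` are all illustrative choices, and the threshold uses uniform weights over recent iterate differences, a simplified stand-in for the paper's trigger condition rather than a faithful reimplementation.

```python
import numpy as np

# Hypothetical quadratic losses f_m(theta) = 0.5 * ||A_m theta - b_m||^2 on M workers.
rng = np.random.default_rng(0)
M, dim, n_per = 5, 10, 20
A = [rng.standard_normal((n_per, dim)) for _ in range(M)]
b = [rng.standard_normal(n_per) for _ in range(M)]

def grad(m, theta):
    """Local gradient of worker m's quadratic loss."""
    return A[m].T @ (A[m] @ theta - b[m])

alpha = 1e-3        # step size (assumed small enough for these losses)
xi, D = 0.5, 10     # trigger constant and memory depth (illustrative values)
theta = np.zeros(dim)
last_grad = [grad(m, theta) for m in range(M)]  # last gradient uploaded by each worker
agg = sum(last_grad)                            # server's lazily aggregated gradient
hist = []                                       # recent squared iterate differences
comm = 0

for k in range(200):
    theta_new = theta - alpha * agg
    hist.append(np.sum((theta_new - theta) ** 2))
    # Skip-communication threshold: uniform-weight version of the LAG trigger.
    thresh = (xi / (alpha**2 * M**2)) * np.mean(hist[-D:])
    for m in range(M):
        g = grad(m, theta_new)  # in LAG-WK this check runs locally at the worker
        if np.sum((g - last_grad[m]) ** 2) > thresh:
            agg += g - last_grad[m]  # upload fresh gradient: one communication round
            last_grad[m] = g
            comm += 1
        # else: the server silently reuses this worker's outdated gradient
    theta = theta_new

print(f"communication rounds: {comm} vs. {200 * M} for batch gradient descent")
```

Because the threshold shrinks as the iterates converge while each worker's gradient drift accumulates only slowly on smooth losses, workers with slowly-varying gradients communicate rarely, which is the source of the communication savings the abstract describes.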

Original language: English (US)
Pages (from-to): 5050-5060
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
State: Published - 2018
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: Dec 2 2018 - Dec 8 2018

Bibliographical note

Funding Information:
The work by T. Chen and G. Giannakis is supported in part by NSF 1500713 and 1711471, and NIH 1R01GM104975-01. The work by T. Chen is also supported by the Doctoral Dissertation Fellowship from the University of Minnesota. The work by T. Sun is supported in part by the China Scholarship Council. The work by W. Yin is supported in part by NSF DMS-1720237 and ONR N0001417121.

Publisher Copyright:
© 2018 Curran Associates Inc. All rights reserved.

