Network resource allocation has regained popularity in the era of data deluge and information explosion, yet existing stochastic optimization approaches fall short of attaining a desirable cost-delay tradeoff. Recognizing the central role of Lagrange multipliers in network resource allocation, this paper develops a novel learn-and-adapt stochastic dual gradient (LA-SDG) method that learns a sample-optimal Lagrange multiplier from historical data and accordingly adapts the upcoming resource allocation strategy. Remarkably, LA-SDG requires only one extra sample (gradient) evaluation relative to the celebrated stochastic dual gradient (SDG) method. LA-SDG can be interpreted either as a foresighted learning scheme with an eye on the future, or, from an optimization viewpoint, as a modified heavy-ball iteration. It is established, both theoretically and empirically, that LA-SDG markedly improves the cost-delay tradeoff over state-of-the-art allocation schemes.
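To make the heavy-ball interpretation concrete, the following is a minimal illustrative sketch, not the paper's LA-SDG algorithm: a stochastic dual (sub)gradient ascent on a toy single-constraint allocation problem, with a momentum term added to the multiplier update. The toy problem (minimize the expected cost of allocation x in [0, 1] subject to meeting a random demand a_t on average), the step size `mu`, and the momentum weight `beta` are all assumptions made for illustration.

```python
import random

def primal_policy(lam, c=1.0):
    # Best-response primal allocation for the toy problem:
    # x(lam) = argmin_{x in [0,1]} (c - lam) * x, a bang-bang rule.
    return 0.0 if lam < c else 1.0

def heavy_ball_sdg(num_iters=5000, mu=0.01, beta=0.5, seed=0):
    """Stochastic dual subgradient ascent with a heavy-ball momentum
    term on the Lagrange multiplier (a simplified stand-in for the
    momentum viewpoint; not the authors' exact LA-SDG iteration)."""
    rng = random.Random(seed)
    lam, lam_prev = 0.0, 0.0
    for _ in range(num_iters):
        a_t = rng.uniform(0.0, 0.5)   # random demand sample, E[a] = 0.25
        x_t = primal_policy(lam)      # primal best response to current lam
        g_t = a_t - x_t               # stochastic subgradient of the dual
        # Projected ascent step plus heavy-ball momentum (lam - lam_prev).
        lam_next = max(0.0, lam + mu * g_t + beta * (lam - lam_prev))
        lam_prev, lam = lam, lam_next
    return lam

# For this toy problem the dual optimum is lam* = c = 1.0, and the
# iterates hover near it after a transient.
print(heavy_ball_sdg())
```

The momentum term `beta * (lam - lam_prev)` is what distinguishes this update from plain SDG; in the paper's framing, the learned statistical information plays an analogous accelerating role while the allocation itself stays adapted to the instantaneous state.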
Bibliographical note
Funding Information:
Manuscript received July 10, 2017; revised September 12, 2017; accepted October 31, 2017. Date of publication November 15, 2017; date of current version December 14, 2018. This work was supported in part by NSF 1509040, 1508993, 1509005; in part by NSF China 61573331; in part by NSF Anhui 1608085QF130; and in part by CAS-XDA06040602. Recommended for publication by Associate Editor Fabio Fagnani. (Corresponding author: Georgios B. Giannakis.) T. Chen and G. B. Giannakis are with the Department of Electrical and Computer Engineering and the Digital Technology Center, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: firstname.lastname@example.org; email@example.com).
© 2017 IEEE.
- first-order method
- network resource allocation
- statistical learning
- stochastic approximation