Variance reduction algorithms, with SVRG and SARAH as two representative members, have well-documented merits for empirical risk minimization problems. To perform optimally, however, they require a grid search over two parameters: the step size and the number of iterations per inner loop. This work introduces 'almost tune-free' SVRG and SARAH schemes equipped with i) Barzilai-Borwein (BB) step sizes; ii) averaging; and iii) an inner loop length adjusted to the BB step sizes. In particular, SVRG, SARAH, and their BB variants are first reexamined through an 'estimate sequence' lens, which enables new averaging methods that provably tighten their convergence rates and empirically improve their performance when the step size or the inner loop length is chosen large. A simple yet effective means of adjusting the number of iterations per inner loop is then developed to reinforce the merits of the proposed averaging schemes and BB step sizes. Numerical tests corroborate the proposed methods.
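For concreteness, below is a minimal sketch of SVRG with a BB step size, following the standard SVRG-BB recipe in which the step size for each outer loop is computed from successive snapshots as eta_s = ||x_s - x_{s-1}||^2 / (m * (x_s - x_{s-1})^T (g_s - g_{s-1})), with g_s the full gradient at snapshot x_s and m the inner loop length. The function names, the ridge-regression test problem, and all parameter values are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def svrg_bb(grad_full, grad_i, n, x0, m=100, eta0=0.01, outer_iters=20, seed=0):
        """SVRG with Barzilai-Borwein (BB) step sizes -- an illustrative sketch.

        grad_full(x): full gradient of the empirical risk at x.
        grad_i(x, i): gradient of the i-th component function at x.
        n: number of component functions; m: inner loop length;
        eta0: step size for the first outer loop (BB needs two snapshots).
        """
        rng = np.random.default_rng(seed)
        x_tilde = x0.copy()
        prev_x = prev_g = None
        eta = eta0
        for _ in range(outer_iters):
            g_tilde = grad_full(x_tilde)          # full gradient at the snapshot
            if prev_x is not None:
                # BB step size from successive snapshots; the denominator is
                # positive when the objective is strongly convex.
                dx, dg = x_tilde - prev_x, g_tilde - prev_g
                eta = (dx @ dx) / (m * (dx @ dg))
            prev_x, prev_g = x_tilde, g_tilde
            x = x_tilde.copy()
            for _ in range(m):                    # inner loop of variance-reduced steps
                i = rng.integers(n)
                v = grad_i(x, i) - grad_i(x_tilde, i) + g_tilde
                x = x - eta * v
            x_tilde = x                           # last-iterate snapshot update
        return x_tilde

    # Illustrative test problem: ridge regression on synthetic data.
    rng = np.random.default_rng(1)
    n, d, lam = 500, 20, 1e-2
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
    grad_full = lambda x: A.T @ (A @ x - b) / n + lam * x
    grad_i = lambda x, i: A[i] * (A[i] @ x - b[i]) + lam * x
    x_hat = svrg_bb(grad_full, grad_i, n, np.zeros(d), m=2 * n, eta0=0.01, outer_iters=20)

The paper's other two ingredients, averaging and adjusting m to the BB step size, are omitted here; m is held fixed at a common default of 2n for clarity.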
Original language: English (US)
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daume, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Number of pages: 10
State: Published - 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: Jul 13, 2020 - Jul 18, 2020
Bibliographical note (Funding Information): The authors would like to thank the anonymous reviewers for their constructive feedback. We also gratefully acknowledge support from NSF grants 1711471 and 1901134.
© International Conference on Machine Learning, ICML 2020. All rights reserved.