TY - JOUR

T1 - On faster convergence of cyclic block coordinate descent-type methods for strongly convex minimization

AU - Li, Xingguo

AU - Zhao, Tuo

AU - Arora, Raman

AU - Liu, Han

AU - Hong, Mingyi

PY - 2018/4/1

Y1 - 2018/4/1

N2 - The cyclic block coordinate descent-type (CBCD-type) methods, which perform iterative updates for a few coordinates (a block) simultaneously throughout the procedure, have shown remarkable computational performance for solving strongly convex minimization problems. Typical applications include many popular statistical machine learning methods such as elastic-net regression, ridge penalized logistic regression, and sparse additive regression. Existing optimization literature has shown that for strongly convex minimization, the CBCD-type methods attain an iteration complexity of O(p log(1/ε)), where ε is a pre-specified accuracy of the objective value and p is the number of blocks. However, such an iteration complexity explicitly depends on p, and therefore is at least p times worse than the complexity O(log(1/ε)) of gradient descent (GD) methods. To bridge this theoretical gap, we propose an improved convergence analysis for the CBCD-type methods. In particular, we first show that for a family of quadratic minimization problems, the iteration complexity O(log^2(p) · log(1/ε)) of the CBCD-type methods matches that of the GD methods in terms of dependency on p, up to a log^2(p) factor. Thus our complexity bounds are sharper than the existing bounds by at least a factor of p/log^2(p). We also provide a lower bound to confirm that our improved complexity bounds are tight (up to a log^2(p) factor), under the assumption that the largest and smallest eigenvalues of the Hessian matrix do not scale with p. Finally, we generalize our analysis to other strongly convex minimization problems beyond quadratic ones.

AB - The cyclic block coordinate descent-type (CBCD-type) methods, which perform iterative updates for a few coordinates (a block) simultaneously throughout the procedure, have shown remarkable computational performance for solving strongly convex minimization problems. Typical applications include many popular statistical machine learning methods such as elastic-net regression, ridge penalized logistic regression, and sparse additive regression. Existing optimization literature has shown that for strongly convex minimization, the CBCD-type methods attain an iteration complexity of O(p log(1/ε)), where ε is a pre-specified accuracy of the objective value and p is the number of blocks. However, such an iteration complexity explicitly depends on p, and therefore is at least p times worse than the complexity O(log(1/ε)) of gradient descent (GD) methods. To bridge this theoretical gap, we propose an improved convergence analysis for the CBCD-type methods. In particular, we first show that for a family of quadratic minimization problems, the iteration complexity O(log^2(p) · log(1/ε)) of the CBCD-type methods matches that of the GD methods in terms of dependency on p, up to a log^2(p) factor. Thus our complexity bounds are sharper than the existing bounds by at least a factor of p/log^2(p). We also provide a lower bound to confirm that our improved complexity bounds are tight (up to a log^2(p) factor), under the assumption that the largest and smallest eigenvalues of the Hessian matrix do not scale with p. Finally, we generalize our analysis to other strongly convex minimization problems beyond quadratic ones.

KW - Cyclic block coordinate descent

KW - Gradient descent

KW - Improved iteration complexity

KW - Quadratic minimization

KW - Strongly convex minimization

UR - http://www.scopus.com/inward/record.url?scp=85048752091&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85048752091&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:85048752091

VL - 18

SP - 1

EP - 24

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

ER -