TY - JOUR
T1 - On the convergence of the coordinate descent method for convex differentiable minimization
AU - Luo, Z. Q.
AU - Tseng, P.
N1 - Copyright:
Copyright 2007 Elsevier B.V., All rights reserved.
PY - 1992/1
Y1 - 1992/1
N2 - The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent.
KW - Coordinate descent
KW - convex differentiable optimization
KW - symmetric linear complementarity problems
UR - http://www.scopus.com/inward/record.url?scp=0026678659&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0026678659&partnerID=8YFLogxK
U2 - 10.1007/BF00939948
DO - 10.1007/BF00939948
M3 - Article
AN - SCOPUS:0026678659
SN - 0022-3239
VL - 72
SP - 7
EP - 35
JO - Journal of Optimization Theory and Applications
JF - Journal of Optimization Theory and Applications
IS - 1
ER -