On the linear convergence of the proximal gradient method for trace norm regularization

Ke Hou, Zirui Zhou, Anthony Man Cho So, Zhi Quan Luo

Research output: Contribution to journal › Conference article › peer-review


Abstract

Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such problems is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm-regularized problem, which may be of independent interest.
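As a brief illustration of the method analyzed in the abstract, the following is a minimal sketch of the proximal gradient method applied to a trace norm-regularized least-squares problem. The proximal operator of the trace (nuclear) norm is singular value soft-thresholding, a standard fact not detailed in this record; the problem sizes, regularization weight, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_trace_norm(X, tau):
    """Proximal operator of tau * ||X||_* (singular value soft-thresholding)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(grad_f, L, X0, reg, n_iter=500):
    """Proximal gradient method with constant step size 1/L."""
    X = X0
    for _ in range(n_iter):
        X = prox_trace_norm(X - grad_f(X) / L, reg / L)
    return X

# Illustrative instance: minimize 0.5 * ||A X - B||_F^2 + reg * ||X||_*
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
B = rng.standard_normal((20, 8))
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
grad = lambda X: A.T @ (A @ X - B)       # gradient of the smooth loss
X_hat = proximal_gradient(grad, L, np.zeros((10, 8)), reg=5.0)
print(np.linalg.matrix_rank(X_hat))      # trace norm typically induces low rank
```

The smooth loss here is a least-squares term, which falls within the class of loss functions (smooth with Lipschitz gradient, without strong convexity when A is rank-deficient) for which the paper establishes linear convergence.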

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
State: Published - Jan 1 2013
Event: 27th Annual Conference on Neural Information Processing Systems, NIPS 2013 - Lake Tahoe, NV, United States
Duration: Dec 5 2013 - Dec 10 2013

