On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems

Haibin Zhang, Jiaojiao Jiang, Zhi Quan Luo

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

We consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of the ℓ1-norm and the ℓ2-norm of the optimization variables. This class of problems arises naturally in applications such as the sparse group Lasso, a popular technique for variable selection. An effective approach to solving such problems is the proximal gradient method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
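To make the setting concrete, the following is a minimal sketch of a proximal gradient iteration for the sparse group Lasso regularizer described in the abstract. It assumes a least-squares smooth term (one instance of a strongly convex function composed with a linear mapping), a fixed step size, and hypothetical helper names (prox_sparse_group, proximal_gradient); it illustrates the problem class rather than the exact algorithm analyzed in the paper.

```python
import numpy as np

def prox_sparse_group(v, groups, t, lam1, lam2):
    """Prox of t * (lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2).

    The closed form is elementwise soft-thresholding (l1 part)
    followed by groupwise shrinkage (l2 part), applied per group.
    """
    x = np.zeros_like(v)
    for g in groups:
        # l1 part: soft-threshold each coordinate in the group
        u = np.sign(v[g]) * np.maximum(np.abs(v[g]) - t * lam1, 0.0)
        # l2 part: shrink the whole group toward zero
        norm_u = np.linalg.norm(u)
        if norm_u > 0:
            x[g] = max(1.0 - t * lam2 / norm_u, 0.0) * u
    return x

def proximal_gradient(A, b, groups, lam1, lam2, step, iters=500):
    """PGM for 0.5*||A x - b||^2 + lam1*||x||_1 + lam2*sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)  # gradient of the smooth part
        x = prox_sparse_group(x - step * grad, groups, step, lam1, lam2)
    return x
```

Here `groups` is assumed to be a list of index arrays partitioning the variables into non-overlapping groups, and in practice the step size would be chosen from (the reciprocal of) the Lipschitz constant of the gradient of the smooth part, e.g. 1/||A||².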

Original language: English (US)
Pages (from-to): 163-186
Number of pages: 24
Journal: Journal of the Operations Research Society of China
Volume: 1
Issue number: 2
State: Published - Jun 2013

Bibliographical note

Funding Information:
This work was partially supported by the National Natural Science Foundation of China (Nos. 61179033, DMS-1015346). Part of this work was performed during a research visit by the first author to the University of Minnesota, with support from the Education Commission of Beijing Municipal Government.

Keywords

  • Error bound
  • Linear convergence
  • Proximal gradient method
  • Sparse group Lasso
