On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems

Haibin Zhang, Jiaojiao Jiang, Zhi Quan Luo

Research output: Contribution to journal › Article › peer-review

22 Scopus citations

Abstract

We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the ℓ1-norm and the ℓ2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, which is a popular technique for variable selection. An effective approach to solving such problems is the Proximal Gradient Method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
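To illustrate the problem class and the method discussed in the abstract, the following is a minimal sketch of a proximal gradient iteration for sparse group Lasso. It assumes a squared-error loss f(y) = 0.5‖y − b‖² as the strongly convex smooth part and uses the well-known two-stage proximal operator for the combined ℓ1 + group-ℓ2 regularizer (soft-thresholding followed by group-wise shrinkage); the names A, b, groups, lam1, and lam2 are illustrative and not taken from the paper.

```python
import numpy as np

def prox_sparse_group(v, groups, lam1, lam2, t):
    """Proximal operator of t * (lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2)."""
    # Soft-thresholding handles the l1-norm term.
    u = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    # Group-wise shrinkage handles the l2-norm term.
    x = np.zeros_like(u)
    for g in groups:                        # each g is an index array for one group
        norm_g = np.linalg.norm(u[g])
        if norm_g > t * lam2:
            x[g] = (1.0 - t * lam2 / norm_g) * u[g]
    return x

def pgm_sparse_group_lasso(A, b, groups, lam1, lam2, n_iter=500):
    """PGM iterations x_{k+1} = prox_{t r}(x_k - t * A^T (A x_k - b))."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of 0.5 * ||A x - b||^2
        x = prox_sparse_group(x - t * grad, groups, lam1, lam2, t)
    return x

# Example usage on random data with two groups of five variables each.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
groups = [np.arange(0, 5), np.arange(5, 10)]
x_hat = pgm_sparse_group_lasso(A, b, groups, lam1=0.1, lam2=0.1)
```

Under this composite structure, the paper's error-bound argument is what yields linear convergence of these iterations even though the overall objective is not strongly convex.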

Original language: English (US)
Pages (from-to): 163-186
Number of pages: 24
Journal: Journal of the Operations Research Society of China
Volume: 1
Issue number: 2
DOIs
State: Published - Jan 1 2013

Keywords

  • Error bound
  • Linear convergence
  • Proximal gradient method
  • Sparse group Lasso

