On the Linear Convergence of the Approximate Proximal Splitting Method for Non-smooth Convex Optimization

Mojtaba Kadkhodaie, Maziar Sanjabi, Zhi Quan Luo

Research output: Contribution to journal › Article › peer-review


Abstract

Consider the problem of minimizing the sum of two convex functions, one smooth and the other non-smooth. In this paper, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization problems. The APS class includes many well-known algorithms, such as the proximal splitting method, the block coordinate descent (BCD) method, and the approximate gradient projection methods for smooth convex optimization. We establish the linear convergence of APS methods under a local error bound assumption. Since the latter is known to hold for compressive sensing and sparse group LASSO problems, our analysis implies the linear convergence of the BCD method for these problems without a strong convexity assumption.
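To illustrate the setting the abstract describes, here is a minimal sketch (not from the paper) of the basic proximal splitting iteration x⁺ = prox_{αg}(x − α∇f(x)) applied to a LASSO-type problem, where f is the smooth least-squares term and g is the non-smooth ℓ1 penalty. All function names and parameter choices below are illustrative assumptions, not the authors' APS algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, step, iters=1000):
    # Minimize f(x) + g(x) with f(x) = 0.5*||Ax - b||^2 (smooth)
    # and g(x) = lam*||x||_1 (non-smooth), via the proximal splitting
    # update x <- prox_{step*g}(x - step*grad f(x)).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Small synthetic example: recover a sparse vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of grad f
x_hat = proximal_gradient_lasso(A, b, lam=0.1, step=step)
```

The step size 1/L with L the largest eigenvalue of AᵀA is the standard safe choice guaranteeing descent; the paper's APS framework generalizes this iteration by allowing the proximal step to be computed approximately.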

Original language: English (US)
Pages (from-to): 123-141
Number of pages: 19
Journal: Journal of the Operations Research Society of China
Volume: 2
Issue number: 2
DOIs
State: Published - Jul 2014

Keywords

  • Block coordinate descent method
  • Convergence rate analysis
  • Convex optimization
  • Local error bound
  • Proximal splitting method

