Finding a sparse vector in a subspace: Linear sparsity using alternating directions

Qing Qu, Ju Sun, John Wright

Research output: Contribution to journal › Conference article › peer-review

52 Scopus citations

Abstract

We consider the problem of recovering the sparsest vector in a subspace S ⊆ ℝ^p with dim(S) = n. This problem can be considered a homogeneous variant of the sparse recovery problem, and finds applications in sparse dictionary learning, sparse PCA, and other problems in signal processing and machine learning. Simple convex heuristics for this problem provably break down when the fraction of nonzero entries in the target sparse vector substantially exceeds 1/√n. In contrast, we exhibit a relatively simple nonconvex approach based on alternating directions, which provably succeeds even when the fraction of nonzero entries is Ω(1). To our knowledge, this is the first practical algorithm to achieve this linear scaling. This result assumes a planted sparse model, in which the target sparse vector is embedded in an otherwise random subspace. Empirically, our proposed algorithm also succeeds in more challenging data models arising, e.g., from sparse dictionary learning.
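As a rough illustration of the two ingredients named in the abstract, here is a minimal NumPy sketch: a Bernoulli-Gaussian planted sparse vector embedded in an otherwise random subspace, and an alternating-directions loop that soft-thresholds and then projects back onto the subspace. The parameter choices (p, n, theta, the threshold scale λ ≈ 1/√p) and all helper names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming a Bernoulli-Gaussian planted sparse model and
# the soft-threshold / project-back alternation the abstract describes.
# Parameters and names are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

# Planted sparse model: sparse x0 embedded in a random n-dim subspace S of R^p.
p, n, theta = 1000, 10, 0.2                        # ambient dim, subspace dim, sparsity rate
x0 = rng.binomial(1, theta, p) * rng.standard_normal(p)
B = rng.standard_normal((p, n - 1))                # random complement directions
Y = np.linalg.qr(np.column_stack([x0, B]))[0]      # orthonormal basis for S (p x n)

def adm_sparse_vector(Y, lam=None, iters=500):
    """Alternate soft-thresholding with projection back onto the subspace,
    seeking a unit-norm q such that Y @ q is sparse."""
    p, n = Y.shape
    lam = 1.0 / np.sqrt(p) if lam is None else lam  # heuristic threshold scale
    q = Y[rng.integers(p)].copy()                   # initialize from a row of Y
    q /= np.linalg.norm(q)
    for _ in range(iters):
        yq = Y @ q
        x = np.sign(yq) * np.maximum(np.abs(yq) - lam, 0.0)  # sparsify
        z = Y.T @ x                                 # project back onto S ...
        nz = np.linalg.norm(z)
        if nz == 0.0:                               # everything thresholded away
            break
        q = z / nz                                  # ... and renormalize
    return Y @ q                                    # candidate sparse vector in S

x_hat = adm_sparse_vector(Y)
# Sanity check: correlation near 1 means the planted direction was recovered.
print(abs(x_hat @ x0) / (np.linalg.norm(x_hat) * np.linalg.norm(x0)))
```

The final correlation print is only a sanity test on this synthetic instance; the paper's contribution is the analysis showing that an alternation of this kind succeeds even when the fraction of nonzero entries is Ω(1), well beyond the 1/√n regime of convex heuristics.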

Original language: English (US)
Pages (from-to): 3401-3409
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 4
Issue number: January
State: Published - 2014
Externally published: Yes
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: Dec 8, 2014 - Dec 13, 2014
