Abstract
Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for n data points (each of dimension d) and a nonconvex sparsity penalty. We prove that finding an O(n^{C1} d^{C2})-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any C1, C2 ∈ [0, 1) such that C1 + C2 < 1. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P = NP.
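To make the problem class concrete, below is a minimal sketch of one instance of the objective the abstract describes: an empirical sum of loss functions plus a separable nonconvex sparsity penalty. The squared loss and the minimax concave penalty (MCP) used here are illustrative choices from the broad class the result covers, not the paper's specific construction; all names and parameter values (`lam`, `gamma`) are assumptions for this sketch.

```python
import numpy as np

def mcp_penalty(t, lam=1.0, gamma=2.0):
    # Minimax concave penalty (MCP): one example of a concave
    # sparsity penalty from the class covered by the hardness result.
    # Grows like lam*|t| near zero, then saturates at gamma*lam^2/2.
    t = np.abs(t)
    return np.where(
        t <= gamma * lam,
        lam * t - t**2 / (2 * gamma),
        gamma * lam**2 / 2,
    )

def objective(x, A, b, lam=1.0, gamma=2.0):
    # Regularized sparse minimization objective:
    # empirical sum of (squared) losses over the n data points in (A, b)
    # plus a separable nonconvex penalty on each coordinate of x.
    residual = A @ x - b
    loss = np.sum(residual**2)
    penalty = np.sum(mcp_penalty(x, lam, gamma))
    return loss + penalty
```

The hardness result says that even finding a solution whose objective value is within O(n^{C1} d^{C2}) of the minimum of such an objective is strongly NP-hard, so no polynomial-time algorithm achieves this approximation guarantee unless P = NP.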
Original language | English (US) |
---|---|
Title of host publication | 34th International Conference on Machine Learning, ICML 2017 |
Publisher | International Machine Learning Society (IMLS) |
Pages | 1230-1251 |
Number of pages | 22 |
ISBN (Electronic) | 9781510855144 |
State | Published - 2017 |
Event | 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia |
Duration | Aug 6 2017 → Aug 11 2017 |
Publication series
Name | 34th International Conference on Machine Learning, ICML 2017 |
---|---|
Volume | 2 |
Other
Other | 34th International Conference on Machine Learning, ICML 2017 |
---|---|
Country/Territory | Australia |
City | Sydney |
Period | 8/6/17 → 8/11/17 |
Bibliographical note
Publisher Copyright: © 2017 by the author(s).
Keywords
- Computational complexity
- Concave penalty
- NP-hardness
- Nonconvex optimization
- Sparsity