Consider the regularized sparse minimization problem, which involves an empirical sum of loss functions over n data points (each of dimension d) and a nonconvex sparsity penalty. We prove that finding an O(n^{C1} d^{C2})-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any C1, C2 ∈ [0, 1) such that C1 + C2 < 1. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P = NP.
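The problem class described above can be written in a generic form; this is a sketch consistent with the abstract, not the paper's exact statement, and the loss ℓ and concave penalty p are placeholder symbols:

```latex
% Regularized sparse minimization over n data points (a_i, b_i), dimension d.
% \ell is a loss (e.g., squared loss); p is a nonconvex sparsity penalty
% (e.g., p(t) = t^{1/2}; SCAD and MCP are other common concave choices).
\min_{x \in \mathbb{R}^d} \;
  \frac{1}{n} \sum_{i=1}^{n} \ell\!\left(a_i^{\top} x,\, b_i\right)
  \;+\; \lambda \sum_{j=1}^{d} p\!\left(\lvert x_j \rvert\right)
```

In this notation, the hardness claim says that no polynomial-time algorithm can guarantee a solution whose objective value is within an additive gap of O(n^{C1} d^{C2}) of the optimum, for any C1 + C2 < 1, unless P = NP.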
| Original language | English (US) |
| Title of host publication | 34th International Conference on Machine Learning, ICML 2017 |
| Publisher | International Machine Learning Society (IMLS) |
| Number of pages | 22 |
| State | Published - 2017 |
| Event | 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia |
| Duration | Aug 6, 2017 – Aug 11, 2017 |
Bibliographical note: Publisher Copyright © 2017 by the author(s).
Keywords:
- Computational complexity
- Concave penalty
- Nonconvex optimization