Abstract
Sparse penalized quantile regression is a useful tool for variable selection, robust estimation, and heteroscedasticity detection in high-dimensional data analysis. The computational issue of sparse penalized quantile regression has not yet been fully resolved in the literature, owing to the nonsmoothness of the quantile regression loss function. We introduce fast alternating direction method of multipliers (ADMM) algorithms for computing sparse penalized quantile regression. The convergence properties of the proposed algorithms are established. Numerical examples demonstrate the competitive performance of our algorithm: it significantly outperforms several other fast solvers for high-dimensional penalized quantile regression. Supplementary materials for this article are available online.
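To make the ADMM splitting concrete, the sketch below is a minimal, illustrative implementation of a linearized ADMM iteration for lasso-penalized quantile regression, obtained by splitting the residual vector out of the check loss. It is a generic reconstruction under stated assumptions, not the authors' proposed algorithm from the article, and all names and tuning parameters (`qr_lasso_admm`, `rho`, `eta`, `lam`) are hypothetical.

```python
# Minimal sketch (not the paper's exact algorithm) of linearized ADMM for
#   minimize (1/n) * sum_i rho_tau(y_i - x_i' beta) + lam * ||beta||_1,
# using the splitting z = y - X beta, i.e., the constraint X beta + z = y.
import numpy as np


def soft_threshold(v, t):
    """Elementwise soft-thresholding: prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def prox_check(v, tau, alpha):
    """Elementwise prox of alpha * rho_tau, rho_tau(u) = u * (tau - 1{u < 0})."""
    return v - np.clip(v, alpha * (tau - 1.0), alpha * tau)


def qr_lasso_admm(X, y, tau=0.5, lam=0.1, rho=1.0, max_iter=500, tol=1e-6):
    n, p = X.shape
    beta = np.zeros(p)
    z = y.copy()                 # residual block, z approximates y - X beta
    u = np.zeros(n)              # scaled dual variable
    eta = rho * np.linalg.norm(X, 2) ** 2   # linearization constant >= rho * ||X||_2^2

    for _ in range(max_iter):
        # beta-update: linearize the quadratic coupling term, then soft-threshold
        grad = rho * X.T @ (X @ beta + z - y + u)
        beta_new = soft_threshold(beta - grad / eta, lam / eta)

        # z-update: closed-form prox of the check loss, applied elementwise
        z = prox_check(y - X @ beta_new - u, tau, 1.0 / (n * rho))

        # dual update on the constraint X beta + z = y
        u = u + X @ beta_new + z - y

        if np.linalg.norm(beta_new - beta) <= tol * (1.0 + np.linalg.norm(beta)):
            beta = beta_new
            break
        beta = beta_new
    return beta
```

The z-update is cheap because the proximal operator of the check loss has a closed form (a two-sided shift toward zero), which is one reason ADMM-type schemes are attractive here despite the nonsmooth loss; nonconvex penalties such as SCAD or MCP would replace the soft-thresholding step with the corresponding thresholding rule.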
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 319-331 |
| Number of pages | 13 |
| Journal | Technometrics |
| Volume | 60 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jul 3 2018 |
Bibliographical note
Funding Information: This work is supported in part by NSF grant DMS-1505111, the 111 Project of China (B16002), National Science Foundation of China grants 11431002 and 11671029, and the Hong Kong Research Grants Council General Research Fund (14205314).
Keywords
- Alternating direction method of multipliers
- Lasso
- Nonconvex penalty
- Quantile regression
- Variable selection