In high-dimensional data analysis, feature selection is an effective means of dimension reduction, and it proceeds together with parameter estimation. Concerning the accuracy of selection and estimation, we study nonconvex constrained and regularized likelihoods in the presence of nuisance parameters. Theoretically, we show that the constrained L0 likelihood and its computational surrogate are optimal in that they achieve feature selection consistency and sharp parameter estimation, under one necessary condition required for any method to be selection consistent and to achieve sharp parameter estimation. This permits up to exponentially many candidate features. Computationally, we develop difference convex methods to implement the computational surrogate through primal and dual subproblems. These results establish a central role of L0-constrained and regularized likelihoods in feature selection and in parameter estimation involving selection. As applications of the general method and theory, we perform feature selection in linear regression and logistic regression, and estimate a precision matrix in Gaussian graphical models. In these settings, we gain new theoretical insight and obtain favorable numerical results. Finally, we discuss an application to predicting the metastasis status of breast cancer patients from their gene expression profiles. This article has online supplementary material.
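To make the computational idea concrete, the following is a minimal sketch, not the authors' code, of a DC-type (majorize-minimize) iteration for a truncated-L1 surrogate of the L0 penalty in linear regression: each outer step majorizes the nonconvex penalty at the current estimate, yielding a weighted lasso subproblem solved by coordinate descent. The surrogate form lam * min(|b|/tau, 1), the tuning names `lam` and `tau`, and all function names are assumptions made here for illustration.

```python
"""Illustrative sketch (not the authors' implementation): a DC/majorize-minimize
loop for a truncated-L1 surrogate of the L0 penalty in linear regression,
with weighted-lasso subproblems solved by coordinate descent."""
import numpy as np


def soft(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def weighted_lasso_cd(X, y, w, beta0, n_iter=200, tol=1e-7):
    """Coordinate descent for (1/2n)||y - X b||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    beta = beta0.copy()
    col_ss = (X ** 2).mean(axis=0)      # (1/n) * ||x_j||^2 per column
    r = y - X @ beta                    # current residual
    for _ in range(n_iter):
        max_delta = 0.0
        for j in range(p):
            if col_ss[j] == 0.0:
                continue
            rho = X[:, j] @ r / n + col_ss[j] * beta[j]  # partial-residual correlation
            new = soft(rho, w[j]) / col_ss[j]
            delta = new - beta[j]
            if delta != 0.0:
                r -= X[:, j] * delta
                beta[j] = new
                max_delta = max(max_delta, abs(delta))
        if max_delta < tol:
            break
    return beta


def dc_fit(X, y, lam, tau, n_outer=20):
    """Outer DC loop: majorize lam * min(|b|/tau, 1) at the current estimate.
    Coordinates with |b_j| <= tau keep an L1 weight lam/tau; the rest are
    unpenalized in the next weighted-lasso subproblem."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_outer):
        w = np.where(np.abs(beta) <= tau, lam / tau, 0.0)
        beta_new = weighted_lasso_cd(X, y, w, beta)
        if np.allclose(beta_new, beta, atol=1e-6):
            beta = beta_new
            break
        beta = beta_new
    return beta


if __name__ == "__main__":
    # Toy example with 5 true signals among 200 candidate features.
    rng = np.random.default_rng(0)
    n, p = 100, 200
    X = rng.standard_normal((n, p))
    true_beta = np.zeros(p)
    true_beta[:5] = 2.0
    y = X @ true_beta + 0.5 * rng.standard_normal(n)
    est = dc_fit(X, y, lam=0.2, tau=0.05)
    print("selected features:", np.flatnonzero(est))
```

In this sketch, only coordinates whose current estimate is at most tau in magnitude stay penalized at each step, so large coefficients escape the shrinkage that a plain lasso would impose; that is the intuition behind the surrogate, though the paper's actual algorithm and applications (including logistic regression and precision-matrix estimation) are more general.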
Bibliographical note
Funding Information:
Xiaotong Shen and Yunzhang Zhu, School of Statistics, University of Minnesota, Minneapolis, MN 55455. Wei Pan, Division of Biostatistics, University of Minnesota, Minneapolis, MN 55455. Research supported in part by NSF grant DMS-0906616, NIH grants 1R01GM081535, 2R01GM081535, HL65462, and R01HL105397. The authors thank the editor, the associate editor, and anonymous referees for helpful comments and suggestions.
- (p, n)-asymptotics
- Continuous but nonsmooth minimization
- Coordinate descent
- General likelihood
- Graphical models