Adaptive first-order methods for general sparse inverse covariance selection

Research output: Contribution to journal › Article › peer-review

25 Scopus citations


In this paper we consider estimating the sparse inverse covariance matrix of a Gaussian graphical model whose conditional independence structure is assumed to be partially known. As in [A. d'Aspremont, O. Banerjee, and L. El Ghaoui, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 56-66; M. Yuan and Y. Lin, Biometrika, 94 (2007), pp. 19-35], we formulate this as an l1-norm penalized maximum likelihood estimation problem. We then propose an algorithmic framework and, within it, develop two first-order methods for solving this estimation problem: the adaptive spectral projected gradient (ASPG) method and the adaptive Nesterov's smooth (ANS) method. Finally, we compare the performance of these two methods with glasso [J. Friedman, T. Hastie, and R. Tibshirani, Biostatistics, 9 (2008), pp. 432-441; J. Friedman, T. Hastie, and R. Tibshirani, Glasso: Graphical Lasso for R, Software package, Department of Statistics, Stanford University, Stanford, CA, 2007] on a set of randomly generated instances. Our computational results demonstrate that our methods can solve problems with matrix dimension of at least a thousand and with nearly half a million constraints in a reasonable amount of time, and that the ASPG method generally outperforms both the ANS method and glasso.
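The abstract's estimation problem minimizes the negative Gaussian log-likelihood plus an l1 penalty on the precision matrix. The sketch below evaluates that penalized objective in NumPy; it is only an illustration of the objective function, not the authors' ASPG or ANS methods, and the function name, the toy matrix `S`, and the penalty weight `rho=0.1` are made up for the example.

```python
import numpy as np

def penalized_neg_log_likelihood(X, S, rho):
    """l1-penalized negative log-likelihood for a candidate precision matrix X.

    S is the sample covariance matrix; rho > 0 weights the l1 penalty
    that encourages sparsity in X (zeros in X correspond to conditional
    independence in the Gaussian graphical model).
    """
    # X must be symmetric positive definite for log det to be defined;
    # a Cholesky factorization both checks this and gives log det cheaply.
    try:
        L = np.linalg.cholesky(X)
    except np.linalg.LinAlgError:
        return np.inf
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -logdet + np.trace(S @ X) + rho * np.abs(X).sum()

# Tiny 2x2 illustration (data are invented for the example).
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])
X = np.linalg.inv(S)  # the unpenalized maximum-likelihood estimate
val = penalized_neg_log_likelihood(X, S, rho=0.1)
```

At the unpenalized optimum X = S^{-1} with rho = 0 the objective reduces to log det S + n, which gives a quick sanity check on the implementation.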

Original language: English (US)
Pages (from-to): 2000-2016
Number of pages: 17
Journal: SIAM Journal on Matrix Analysis and Applications
Issue number: 4
State: Published - 2009
Externally published: Yes


Keywords

  • Adaptive Nesterov's smooth method
  • Adaptive spectral projected gradient method
  • Sparse inverse covariance selection


