The Bayesian covariance lasso

Zakaria S. Khondker, Hongtu Zhu, Haitao Chu, Weili Lin, Joseph G. Ibrahim

Research output: Contribution to journal › Article › peer-review


Abstract

Estimation of sparse covariance matrices and their inverses subject to positive definiteness constraints has attracted considerable attention in recent years. Frequentist methods have utilized penalized likelihood approaches, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
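The frequentist penalized-likelihood baseline that the BCLASSO priors generalize can be illustrated with an L1-penalized precision-matrix fit. The sketch below is not the authors' Bayesian sampler; it uses scikit-learn's `GraphicalLasso` as a stand-in for the frequentist graphical lasso, with a hypothetical tridiagonal true precision matrix chosen for illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Illustrative sketch only: frequentist L1-penalized precision-matrix
# estimation (graphical lasso), the penalized-likelihood counterpart
# to BCLASSO. All dimensions and the true precision matrix below are
# assumptions for the example, not from the paper.
rng = np.random.default_rng(0)
p = 5

# True sparse precision matrix: tridiagonal and positive definite.
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=200)

# alpha is the L1 penalty weight; larger values shrink more
# off-diagonal entries of the estimated precision matrix to zero.
model = GraphicalLasso(alpha=0.1).fit(X)
est_prec = model.precision_
print(np.round(est_prec, 2))
```

A Bayesian analogue would place a Laplace-type prior on the precision entries and sample subject to positive definiteness, which is the constraint the paper's sampling scheme handles without precalculating boundaries.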

Original language: English (US)
Pages (from-to): 243-259
Number of pages: 17
Journal: Statistics and its Interface
Volume: 6
Issue number: 2
DOIs
State: Published - 2013

Keywords

  • Bayesian covariance lasso
  • Network exploration
  • Nonfull rank data
  • Penalized likelihood
  • Precision matrix

