Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review

Mary Kathryn Cowles, Bradley P. Carlin

Research output: Contribution to journal › Article › peer-review

1438 Scopus citations

Abstract

A critical issue for users of Markov chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but to date has yielded relatively little of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of 13 convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all of the methods can fail to detect the sorts of convergence failure that they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
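To illustrate the parallel-chain and autocorrelation monitoring recommended in the abstract, the sketch below implements a simplified version of the Gelman and Rubin shrink factor (one of the diagnostics reviewed in the paper) together with a lag-k sample autocorrelation. This is a minimal illustration, not the paper's own code: the function names are ours, and the degrees-of-freedom correction in the original diagnostic is omitted.

```python
import numpy as np

def gelman_rubin(chains):
    """Simplified potential scale reduction factor (R-hat) for m parallel chains.

    chains : array of shape (m, n) -- m chains of length n for one scalar parameter.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B_over_n = chain_means.var(ddof=1)          # between-chain variance of the means (B/n)
    W = chains.var(axis=1, ddof=1).mean()       # average within-chain variance
    var_hat = (n - 1) / n * W + B_over_n        # pooled estimate of posterior variance
    return np.sqrt(var_hat / W)

def autocorr(x, lag):
    """Sample autocorrelation of a single chain at a given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder chains: independent N(0, 1) draws, so R-hat should be close to 1
    # and the lag-1 autocorrelation close to 0. Real MCMC output would replace these.
    chains = rng.normal(size=(3, 5000))
    print("R-hat:", gelman_rubin(chains))
    print("lag-1 autocorrelation, chain 0:", autocorr(chains[0], lag=1))
```

Values of R-hat much larger than 1, or slowly decaying autocorrelations, signal that the chains have not yet mixed, which is consistent with the paper's advice to combine several such checks rather than rely on any single one.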

Original language: English (US)
Pages (from-to): 883-904
Number of pages: 22
Journal: Journal of the American Statistical Association
Volume: 91
Issue number: 434
DOIs
State: Published - Jun 1 1996

Keywords

  • Autocorrelation
  • Gibbs sampler
  • Metropolis-Hastings algorithm
