The quantitative evaluation of functional neuroimaging experiments: The NPAIRS data analysis framework

Stephen C. Strother, Jon Anderson, Lars Kai Hansen, Ulrik Kjems, Rafal Kustra, John Sidtis, Sally Frutiger, Suraj Muley, Stephen LaConte, David Rottenberg

Research output: Contribution to journal › Article › peer-review

200 Scopus citations

Abstract

We introduce a data-analysis framework and performance metrics for evaluating and optimizing the interaction between activation tasks, experimental designs, and the methodological choices and tools for data acquisition, preprocessing, data analysis, and extraction of statistical parametric maps (SPMs). Our NPAIRS (nonparametric prediction, activation, influence, and reproducibility resampling) framework provides an alternative to simulations and ROC curves by using real PET and fMRI data sets to examine the relationship between prediction accuracy and the signal-to-noise ratios (SNRs) associated with reproducible SPMs. Using cross-validation resampling we plot training-test set predictions of the experimental design variables (e.g., brain-state labels) versus reproducibility SNR metrics for the associated SPMs. We demonstrate the utility of this framework across the wide range of performance metrics obtained from [15O]water PET studies of 12 age- and sex-matched groups of subjects (8 subjects/set) performing different motor tasks. For the 12 data sets we apply NPAIRS with both univariate and multivariate data-analysis approaches to: (1) demonstrate that this framework may be used to obtain reproducible SPMs from any data-analysis approach on a common Z-score scale (rSPM{Z}); (2) demonstrate that the histogram of a rSPM{Z} image may be modeled as the sum of a data-analysis-dependent noise distribution and a task-dependent, Gaussian signal distribution that scales monotonically with our reproducibility performance metric; (3) explore the relation between prediction and reproducibility performance metrics with an emphasis on bias-variance tradeoffs for flexible, multivariate models; and (4) measure the broad range of reproducibility SNRs and the significant influence of individual subjects.
A companion paper presents learning curves for four of these 12 data sets, tracing an alternative mutual-information prediction metric and NPAIRS reproducibility as a function of training-set sizes from 2 to 18 subjects. We propose the NPAIRS framework as a validation tool for testing and optimizing methodological choices and tools in functional neuroimaging.
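To make the split-half resampling idea concrete, the following is a minimal sketch (not the authors' code) of computing a reproducibility correlation and a reproducible SPM on a common Z-score scale from two independent-half SPMs. The simulated data, the active-voxel count, and the use of simple mean maps are illustrative assumptions; the signal/noise-axis projection and the SNR relation sqrt(2r/(1-r)) follow the framework described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation: two split-half SPMs sharing a common "signal"
# pattern plus independent noise (stand-ins for maps from two disjoint
# subject groups)
n_voxels = 10_000
signal = np.zeros(n_voxels)
signal[:500] = 2.0  # hypothetical set of "active" voxels
spm1 = signal + rng.standard_normal(n_voxels)
spm2 = signal + rng.standard_normal(n_voxels)

# Reproducibility metric: Pearson correlation of the two half-SPMs
r = np.corrcoef(spm1, spm2)[0, 1]

# Project each voxel's (spm1, spm2) pair onto the signal and noise axes
# of the split-half scatter plot
signal_axis = (spm1 + spm2) / np.sqrt(2)
noise_axis = (spm1 - spm2) / np.sqrt(2)

# rSPM{Z}: signal-axis projection normalized by the noise-axis standard
# deviation, giving a Z-scored reproducible SPM
rspm_z = signal_axis / noise_axis.std(ddof=1)

# Global SNR implied by the reproducibility correlation
snr = np.sqrt(2 * r / (1 - r))
print(f"reproducibility r = {r:.3f}, SNR = {snr:.3f}")
```

In practice the two half-SPMs would come from analyzing two independent subject subsets with the same pipeline, and the resampling would be repeated over many splits.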

Original language: English (US)
Pages (from-to): 747-771
Number of pages: 25
Journal: NeuroImage
Volume: 15
Issue number: 4
DOIs
State: Published - Apr 2002

Bibliographical note

Funding Information:
This work was partly supported by NIH Grant NS33179 and Human Brain Project Grant P20 MH57180 and by the Danish Research Councils for the Natural and Technical Sciences through the Danish Computational Neural Network Center (CONNECT) and the Technology Center Through Highly Oriented Research (THOR). We thank Jeih-San Liow and Dana Daly for their assistance with data collection and analysis; Nick Lange, Jan Larsen, Niels Mørch, and Finn Nielsen for many helpful and enlightening discussions; and the anonymous reviewers for suggestions that significantly improved the paper.

Copyright:
Copyright 2018 Elsevier B.V., All rights reserved.

Keywords

  • Cross-validation
  • Data analysis
  • Multisubject PET and fMRI studies
  • Multivariate
  • Prediction error
  • Reproducibility
  • Resampling
  • Univariate

