In designing cognitive neuroscience experiments, resource limitations induce a fundamental trade-off between sampling variation across individual brains and sampling variation across experimental conditions. Here, we argue that extensive sampling of experimental conditions is essential for understanding how human brains process complex stimuli, that a model of how any one brain does this is likely to generalize to most other brains, and that introducing large numbers of subjects into an analysis pool is likely to introduce unnecessary and undesirable variance. Thus, contrary to conventional wisdom, we believe that sampling many individuals provides relatively few benefits and that extensive sampling of a limited number of subjects is more productive for revealing general principles. Furthermore, an emphasis on depth in individual brains is well-suited for capitalizing on the improvements in resolution and signal-to-noise ratio that are being achieved in modern neuroscientific measurement techniques.
Funding Information:
We thank L. Dowdle, B. Hutchinson, and J. Winawer for comments on the manuscript. This work was supported by NSF IIS-1822683 and NSF IIS-1822929.