Visual perception involves the grouping of individual elements into coherent patterns, such as object representations, that reduce the descriptive complexity of a visual scene. The computational and physiological bases of this perceptual grouping remain poorly understood. We discuss recent fMRI evidence from our laboratory in which we measured activity in a higher object-processing area (the lateral occipital complex, LOC) and in primary visual cortex (V1) in response to visual elements that were either grouped into objects or randomly arranged. We observed significant activity increases in the LOC and concurrent reductions of activity in V1 when elements formed coherent shapes, suggesting that activity in early visual areas is reduced as a result of grouping processes performed in higher areas. In light of these results we review related empirical findings of context-dependent changes in activity, recent neurophysiology research related to cortical feedback, and computational models that incorporate feedback operations. We suggest that feedback from high-level visual areas reduces activity in lower areas in order to simplify the description of a visual image, consistent with both predictive coding models of perception and probabilistic notions of 'explaining away.'
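The predictive coding idea invoked above can be made concrete with a minimal numerical sketch. This is an illustration of the general principle, not the authors' actual model: a higher area sends a top-down prediction of the lower area's input, and the lower area's residual activity encodes only the unexplained prediction error. All variable names and the quadratic error measure here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the bottom-up input arriving at a lower area (V1-like).
image = rng.normal(size=100)

# When elements group into a coherent shape, the higher area (LOC-like)
# can predict most of the input; a random arrangement affords no prediction.
good_prediction = 0.9 * image           # coherent shape: feedback explains most of the input
poor_prediction = np.zeros_like(image)  # random arrangement: feedback explains nothing

def residual_activity(inp, prediction):
    """Lower-area activity under predictive coding: the unexplained residual energy."""
    return float(np.sum((inp - prediction) ** 2))

# Well-predicted (grouped) input leaves less residual activity in the lower
# area, mirroring the reduced V1 response to coherent shapes reported above.
print(residual_activity(image, good_prediction)
      < residual_activity(image, poor_prediction))
```

In this toy setting the grouped stimulus yields roughly 1% of the residual activity of the ungrouped one, qualitatively matching the reported pattern of reduced V1 activity alongside increased LOC activity.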
Funding Information:
Portions of this work were reported earlier in Murray et al. (2002) and at the Human Brain Mapping (Shen, Kersten, and Ugurbil, 1999), ARVO (Kersten, Shen, Ugurbil, and Schrater, 1999), and Soc. Neurosci. (Murray, Olshausen, and Woods, 2001) conferences. Supported by NIH R01 EY015261, NIH P41 RR08079, pre-doctoral NRSA MH-12791 and post-doctoral NRSA EY015342-01 (S.O.M.), NSF SBR-9631682 (D.K.), NIH MH-57921 (B.A.O.), NIH MH-41544 and VA Research Service (D.L.W.). We thank Peter Battaglia for helpful comments.
- Predictive coding
- Probabilistic models