During natural vision, humans categorize the scenes they encounter: an office, the beach, and so on. These categories are informed by knowledge of the way that objects co-occur in natural scenes. How does the human brain aggregate information about objects to represent scene categories? To explore this issue, we used statistical learning methods to learn categories that objectively capture the co-occurrence statistics of objects in a large collection of natural scenes. Using the learned categories, we modeled fMRI brain signals evoked in human subjects when viewing images of scenes. We find that evoked activity across much of anterior visual cortex is explained by the learned categories. Furthermore, a decoder based on these scene categories accurately predicts the categories and objects comprising novel scenes from brain activity evoked by those scenes. These results suggest that the human brain represents scene categories that capture the co-occurrence statistics of objects in the world.
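The notion of "co-occurrence statistics of objects" can be illustrated with a toy sketch. This is not the paper's actual model (the abstract does not specify the statistical learning method used); it only shows, under that simplifying assumption, how pairwise object co-occurrence counts could be tallied from scenes annotated with object labels.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy data: each scene is a set of object labels.
scenes = [
    {"desk", "chair", "computer"},   # office-like scene
    {"desk", "chair", "lamp"},       # office-like scene
    {"sand", "water", "umbrella"},   # beach-like scene
]

def cooccurrence_counts(scenes):
    """Count how often each unordered pair of objects appears in the same scene."""
    counts = Counter()
    for objects in scenes:
        for pair in combinations(sorted(objects), 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(scenes)
# "chair" and "desk" co-occur in both office-like scenes,
# while beach objects co-occur only with each other.
```

A category model of the kind the abstract describes would go further, grouping objects that co-occur frequently into latent scene categories; the counts above are merely the raw statistics such a model would summarize.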
Funding Information:
This work was supported by grants to J.L.G. from the National Eye Institute (EY019684), the National Institute of Mental Health (MH66990), and the National Science Foundation Center for the Science of Information (CCF-0939370). We thank An Vu for data collection assistance and Tom Griffiths, Shinji Nishimoto, Tolga Cukur, Mark Lescroart, Michael Oliver, Alex Huth, James Gao, Natalia Bilenko, Anwar Nunez, Ben Dichter, and Melanie Miller for helpful discussions and comments.