Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Bibliographical note
We thank Adam Anderson, Patrick Cavanagh, Ken Nakayama, Molly Potter, Ron Rensink, Dan Simons, Jeremy Wolfe, and numerous other colleagues for helpful discussions. Woo-kyoung Ahn and Ingrid Olson provided helpful comments on an earlier draft of this article. This paper has also benefited greatly from constructive feedback from Gordon Logan, Mike Stadler, and our other reviewers. We thank Joanie Sanchez for her assistance in running Experiment 1. This research was supported by a Social Science Faculty Research Award from Yale University. Portions of this research were presented at the Annual Meeting of the Association for Research in Vision and Ophthalmology, Fort Lauderdale, FL, in May 1997, and at the Annual Meeting of the Psychonomic Society, Philadelphia, PA, in November 1997.