Top-down attentional guidance based on implicit learning of visual covariation

Marvin M. Chun, Yuhong Jiang

Research output: Contribution to journal › Article › peer-review

Abstract

The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning, illustrating how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally, and the resulting memory representations were implicit. Together, these experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.

Original language: English (US)
Pages (from-to): 360-365
Number of pages: 6
Journal: Psychological Science
Volume: 10
Issue number: 4
DOIs
State: Published - Jul 1999

Bibliographical note

Funding Information:
This research was supported in part by Grant BCS-9817349 from the National Science Foundation. We thank Ron Rensink and Carol Seger for their helpful comments on an earlier version of this manuscript.
