Spatial Context Learning Survives Interference From Working Memory Load

Timothy J. Vickery, Rachel S. Sussman, Yuhong V. Jiang

    Research output: Contribution to journal › Article › peer-review

    Abstract

    The human visual system is constantly confronted with an overwhelming amount of information, only a subset of which can be processed in complete detail. Attention and implicit learning are two important mechanisms that optimize vision. This study addressed the relationship between these two mechanisms. Specifically, we asked: is implicit learning of spatial context affected by the amount of working memory load devoted to an irrelevant task? We tested observers in visual search tasks in which search displays occasionally repeated. Observers became faster when searching repeated displays than unrepeated ones, showing contextual cuing. We found that the size of contextual cuing was unaffected by whether observers learned repeated displays under unitary attention or under divided attention induced by working memory manipulations. These results held when working memory was loaded by colors, dot patterns, individual dot locations, or multiple potential targets. We conclude that spatial context learning is robust to interference from manipulations that limit the availability of attention and working memory.

    Original language: English (US)
    Pages (from-to): 1358-1371
    Number of pages: 14
    Journal: Journal of Experimental Psychology: Human Perception and Performance
    Volume: 36
    Issue number: 6
    DOIs
    State: Published - Dec 2010

    Keywords

    • Contextual cuing
    • Divided attention
    • Implicit learning
    • Visual search
    • Visual working memory
