Reconciling Reinforcement Learning Models With Behavioral Extinction and Renewal: Implications for Addiction, Relapse, and Problem Gambling

David Redish, Steve Jensen, Adam Johnson, Zeb Kurth-Nelson

Research output: Contribution to journal › Article › peer-review

242 Scopus citations

Abstract

Because learned associations are quickly renewed following extinction, the extinction process must include processes other than unlearning. However, reinforcement learning models, such as the temporal difference reinforcement learning (TDRL) model, treat extinction as an unlearning of associated value and are thus unable to capture renewal. TDRL models are based on the hypothesis that dopamine carries a reward prediction error signal; these models predict reward by driving that reward error to zero. The authors construct a TDRL model that can accommodate extinction and renewal through two simple processes: (a) a TDRL process that learns the value of situation-action pairs and (b) a situation recognition process that categorizes the observed cues into situations. This model has implications for dysfunctional states, including relapse after addiction and problem gambling.
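To make the two-process account concrete, below is a minimal Python sketch of how a situation-recognition step can sit in front of a standard TDRL value learner. Everything here is an illustrative assumption rather than the paper's implementation: the names (`classify_situation`, `TDRLAgent`), the cue-overlap similarity measure, and the parameter values are all hypothetical.

```python
# Minimal sketch of the two-process model described in the abstract.
# All names and parameters are illustrative assumptions; the paper's
# actual formulation may differ in detail.
import random
from collections import defaultdict

def classify_situation(cues, known_situations, threshold=0.5):
    """Situation-recognition process: map observed cues onto a known
    situation if one is similar enough, otherwise create a new one.
    Similarity here is a simple cue-overlap fraction (an assumption)."""
    cues = frozenset(cues)
    for sit, sit_cues in known_situations.items():
        overlap = len(cues & sit_cues) / max(len(cues | sit_cues), 1)
        if overlap >= threshold:
            return sit
    new_sit = len(known_situations)
    known_situations[new_sit] = cues
    return new_sit

class TDRLAgent:
    """TDRL process: learn values of situation-action pairs by driving
    the reward-prediction error (delta) toward zero."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # Q(situation, action) values
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, situation):
        # Epsilon-greedy action selection over the current situation.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(situation, a)])

    def update(self, s, a, reward, s_next):
        # delta plays the role of the dopamine-like prediction error.
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        delta = reward + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * delta

known = {}
agent = TDRLAgent(actions=["press", "wait"])
s_acq = classify_situation({"light", "tone"}, known)   # acquisition context
s_ext = classify_situation({"dark", "noise"}, known)   # extinction context
```

Because the extinction cues are categorized as a new situation, the value learned for the acquisition situation-action pair is never overwritten during extinction; when the original cues return, the old value is still in place, which is the renewal effect the model is built to capture.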

Original language: English (US)
Pages (from-to): 784-805
Number of pages: 22
Journal: Psychological Review
Volume: 114
Issue number: 3
DOIs
State: Published - Jul 2007

Keywords

  • dopamine
  • problem gambling
  • reinstantiation
  • temporal difference reinforcement learning (TDRL)
