Abstract
Addictive drugs have been hypothesized to access the same neurophysiological mechanisms as natural learning systems. These natural learning systems can be modeled through temporal-difference reinforcement learning (TDRL), which requires a reward-error signal that has been hypothesized to be carried by dopamine. TDRL learns to predict reward by driving that reward-error signal to zero. By adding a noncompensable drug-induced dopamine increase to a TDRL model, a computational model of addiction is constructed that over-selects actions leading to drug receipt. The model provides an explanation for important aspects of the addiction literature and provides a theoretical viewpoint with which to address other aspects.
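The mechanism described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's exact model: the state space, reward magnitude, learning rate, and drug-dopamine value are all hypothetical. The key idea it demonstrates is that a noncompensable drug term puts a floor under the TD error, so the learned value of the drug-taking action grows without bound instead of converging.

```python
ALPHA = 0.1   # learning rate (hypothetical)
GAMMA = 0.9   # discount factor (hypothetical)

def learn_value(reward, drug_dopamine=0.0, episodes=2000):
    """Learn the value V of a single state that yields `reward` and then
    terminates (so V(next state) = 0). With drug_dopamine > 0, the TD
    error is floored at that value and cannot be driven to zero."""
    V = 0.0
    for _ in range(episodes):
        td_error = reward + GAMMA * 0.0 - V
        if drug_dopamine > 0:
            # Noncompensable dopamine surge: the error signal can never
            # fall below the drug-induced increase, so learning never
            # converges and V keeps inflating.
            td_error = max(td_error + drug_dopamine, drug_dopamine)
        V += ALPHA * td_error
    return V

natural = learn_value(reward=1.0)                   # converges near R = 1.0
drug = learn_value(reward=1.0, drug_dopamine=0.5)   # grows without bound
```

Under these assumed parameters, the natural-reward value settles at the true reward, while the drug-paired value keeps climbing with every episode, which is how the model over-selects actions leading to drug receipt.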
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1944-1947 |
| Number of pages | 4 |
| Journal | Science |
| Volume | 306 |
| Issue number | 5703 |
| DOIs | |
| State | Published - Dec 10 2004 |