Hippocampal replay contributes to within session learning in a temporal difference reinforcement learning model

Adam Johnson, A. David Redish

Research output: Contribution to journal › Article › peer-review

Abstract

Temporal difference reinforcement learning (TDRL) algorithms, hypothesized to partially explain basal ganglia function, learn more slowly than real animals. Modified TDRL algorithms (e.g., the Dyna-Q family) learn faster than standard TDRL by practicing experienced sequences offline. We suggest that the replay phenomenon, in which ensembles of hippocampal neurons replay previously experienced firing sequences during subsequent rest and sleep, may provide practice sequences that improve the speed of TDRL learning, even within a single session. We test the plausibility of this hypothesis in a computational model of a multiple-T choice task. Rats show two learning rates on this task: a fast decrease in errors and a slow development of a stereotyped path. Adding developing replay to the model accelerates learning of the correct path but slows the stereotyping of that path. These models provide testable predictions about the effects of both hippocampal inactivation and hippocampal replay on this task.

Original language: English (US)
Pages (from-to): 1163-1171
Number of pages: 9
Journal: Neural Networks
Volume: 18
Issue number: 9
DOIs
State: Published - Nov 2005

Bibliographical note

Funding Information:
We thank Jadin Jackson, Zeb Kurth-Nelson, Beth Masimore, Neil Schmitzer-Torbert, and Giuseppe Cortese for helpful discussions and comments on the manuscript. This work was supported by NIH (MH68029) and by fellowships from 3M and from the Center for Cognitive Sciences (grant number T32HD007151).
