Abstract
Temporal difference reinforcement learning (TDRL) algorithms, hypothesized to partially explain basal ganglia functionality, learn more slowly than real animals. Modified TDRL algorithms (e.g. the Dyna-Q family) learn faster than standard TDRL by practicing experienced sequences offline. We suggest that the replay phenomenon, in which ensembles of hippocampal neurons replay previously experienced firing sequences during subsequent rest and sleep, may provide practice sequences to improve the speed of TDRL learning, even within a single session. We test the plausibility of this hypothesis in a computational model of a multiple-T choice task. Rats show two learning rates on this task: a fast decrease in errors and a slow development of a stereotyped path. Adding developing replay to the model accelerates learning of the correct path, but slows the stereotyping of that path. These models yield testable predictions about the effects of both hippocampal inactivation and hippocampal replay on this task.
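To make the Dyna-Q-style mechanism described above concrete, here is a minimal sketch of tabular Q-learning on a toy linear track, with offline "replay" updates drawn from stored experience after each real step. This is an illustrative assumption of the general technique, not the paper's actual multiple-T model; the environment, parameter values, and all names (`N_REPLAY`, `td_update`, etc.) are hypothetical.

```python
import random

# Illustrative sketch (not the authors' model): tabular Q-learning on a
# toy linear track, with Dyna-Q-style offline replay of stored
# transitions after each real step. All parameters are assumptions.

N_STATES = 6          # states 0..5; state 5 is the rewarded goal
ACTIONS = (-1, +1)    # step left or right along the track
ALPHA, GAMMA = 0.1, 0.95
N_REPLAY = 10         # replayed (offline) updates per real step

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
memory = []           # stored (s, a, r, s2) transitions for replay

def step(s, a):
    """Deterministic toy dynamics: move along the track, reward at the end."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return r, s2

def td_update(s, a, r, s2):
    """One temporal-difference (Q-learning) backup."""
    target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

for episode in range(50):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        r, s2 = step(s, a)
        td_update(s, a, r, s2)        # online TDRL update
        memory.append((s, a, r, s2))
        # offline replay: re-practice stored transitions, analogous to
        # the hypothesized role of hippocampal replay in speeding learning
        for _ in range(N_REPLAY):
            td_update(*random.choice(memory))
        s = s2

print(max(Q[(0, b)] for b in ACTIONS))  # learned value of the start state
```

Setting `N_REPLAY = 0` recovers plain online TD learning, so the same script can illustrate the speed-up that replayed practice sequences provide.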
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1163-1171 |
| Number of pages | 9 |
| Journal | Neural Networks |
| Volume | 18 |
| Issue number | 9 |
| DOIs | |
| State | Published - Nov 2005 |
Bibliographical note
Funding Information: We thank Jadin Jackson, Zeb Kurth-Nelson, Beth Masimore, Neil Schmitzer-Torbert, and Giuseppe Cortese for helpful discussions and comments on the manuscript. This work was supported by NIH (MH68029) and by fellowships from 3M and from the Center for Cognitive Sciences (grant number T32HD007151).