Alterations in choice behavior by manipulations of world model

C. S. Green, C. Benson, D. Kersten, P. Schrater

Research output: Contribution to journal › Article › peer-review


Abstract

How to compute initially unknown reward values is one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching" - a consistent example of suboptimal choice behavior seen in humans - occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible but incorrect beliefs about the generative process for outcomes, and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model-based and not consistent with model-free learning.
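
The mechanism summarized in the abstract (probability matching emerging from a Bayesian learner that uses a max decision rule but holds an incorrect world model) can be illustrated with a small simulation. The sketch below is hypothetical and is not the authors' model: it assumes a binary prediction task in which outcome A occurs on 70% of trials, and it approximates an incorrect, non-stationary belief about the generative process with a simple forgetting factor (`decay`) applied to Beta pseudo-counts. All names and parameter values are illustrative.

```python
# Minimal sketch (assumed setup, not the published model): two Bayesian
# predictors with a max decision rule on an i.i.d. binary outcome stream.
import numpy as np

rng = np.random.default_rng(0)
p_true, n_trials = 0.7, 5000
decay = 0.5  # strong forgetting: a crude stand-in for believing p drifts

# Beta(1, 1) pseudo-counts for P(outcome = A).
a_stat, b_stat = 1.0, 1.0   # learner with the correct stationary model
a_dyn, b_dyn = 1.0, 1.0     # learner with the incorrect non-stationary model

choices_stat, choices_dyn = [], []
for _ in range(n_trials):
    outcome_is_a = rng.random() < p_true

    # Max decision rule: predict whichever outcome has the higher
    # posterior-predictive probability on this trial.
    choices_stat.append(a_stat / (a_stat + b_stat) > 0.5)
    choices_dyn.append(a_dyn / (a_dyn + b_dyn) > 0.5)

    # Posterior updates after observing the outcome.
    a_stat += outcome_is_a
    b_stat += not outcome_is_a
    a_dyn = decay * a_dyn + outcome_is_a        # old evidence decays away,
    b_dyn = decay * b_dyn + (not outcome_is_a)  # so the predictive keeps fluctuating

print(f"P(predict A), correct stationary model: {np.mean(choices_stat):.2f}")
print(f"P(predict A), incorrect drifting model: {np.mean(choices_dyn):.2f}")
# Typical pattern: close to 1.0 for the correct model (maximizing), but near
# the 0.7 outcome rate for the incorrect model (matching-like behavior).
```

The design point of the sketch is that both learners are internally optimal given their assumptions and both use a max rule; only the assumed generative process differs, and that alone moves behavior from maximizing toward probability matching.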

Original language: English (US)
Pages (from-to): 16401-16406
Number of pages: 6
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 107
Issue number: 37
State: Published - Sep 14, 2010

Keywords

  • Decision making
  • Probability matching
  • Reinforcement learning
