Social learning with Bayesian agents and random decision making

Yunlong Wang, Petar M. Djuric

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, we study the effect of randomness in the decision making of agents on social learning. In the addressed system, the agents make decisions sequentially about the true state of nature. Each agent observes a signal generated according to one of two hypotheses, which represents the state of nature. The signals of all the agents are generated independently from the same state. The agents also know the decisions of all the previous agents in the network. The randomness is modeled by a policy that randomly maps the agents' beliefs to the action space. We propose that the agents learn from the decisions of the previous agents and update their beliefs using Bayesian theory. We define the concept of social belief about the truthfulness of the two hypotheses and provide results on the convergence of the social belief. We also prove that with the proposed random policy, information cascades can be avoided and asymptotic learning occurs. We apply the random policy to data models that represent the observations by a distribution belonging to the exponential family. We then provide performance and convergence analysis of the proposed method as well as simulation results that include comparisons with deterministic and hybrid policies.
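
To make the setup concrete, the following is a minimal simulation sketch of sequential Bayesian social learning with a randomized decision policy. It assumes Gaussian private signals (one exponential-family example) and assumes a specific form of the random policy, namely that each agent decides "1" with probability equal to its private posterior; the parameter values and the exact policy form are illustrative assumptions, not necessarily those used in the paper. Because the decision is randomized, each observed decision remains informative about the state, so the public (social) belief keeps updating rather than locking into a cascade.

import numpy as np

rng = np.random.default_rng(0)

# Binary state of nature: H1 is assumed true here. Signals follow a
# Gaussian (exponential-family) model: N(+mu, 1) under H1, N(-mu, 1)
# under H0. Parameter values are illustrative only.
mu, n_agents, true_state = 0.7, 300, 1

def posterior_h1(x, belief):
    """Agent's private posterior P(H1 | x, past decisions)."""
    l1 = np.exp(-(x - mu) ** 2 / 2.0)
    l0 = np.exp(-(x + mu) ** 2 / 2.0)
    return belief * l1 / (belief * l1 + (1.0 - belief) * l0)

def decision_prob(belief, state, n_mc=2000):
    """Probability that an agent decides '1' under the randomized policy,
    given the hypothesis; estimated by Monte Carlo over the private signal."""
    x = rng.normal(mu if state == 1 else -mu, 1.0, size=n_mc)
    return posterior_h1(x, belief).mean()

social_belief = 0.5              # public belief before the first agent acts
for k in range(n_agents):
    # Private signal drawn from the true state of nature.
    x_k = rng.normal(mu if true_state == 1 else -mu, 1.0)
    p1 = posterior_h1(x_k, social_belief)

    # Randomized mapping of belief to action (assumed policy form):
    # decide "1" with probability equal to the private posterior.
    d_k = int(rng.random() < p1)

    # Subsequent agents observe only d_k; they update the social belief
    # by Bayes' rule using the decision likelihoods under each hypothesis.
    q1 = decision_prob(social_belief, 1)     # P(d=1 | H1, public belief)
    q0 = decision_prob(social_belief, 0)     # P(d=1 | H0, public belief)
    if d_k == 1:
        social_belief = (social_belief * q1 /
                         (social_belief * q1 + (1 - social_belief) * q0))
    else:
        social_belief = (social_belief * (1 - q1) /
                         (social_belief * (1 - q1) +
                          (1 - social_belief) * (1 - q0)))

print(f"social belief in H1 after {n_agents} agents: {social_belief:.3f}")

Under a deterministic threshold policy, by contrast, q1 and q0 can become equal once the public belief is strong enough, at which point decisions stop carrying information and a cascade forms; the randomized policy in this sketch avoids that degeneracy.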

Original language: English (US)
Article number: 7081738
Pages (from-to): 3241-3250
Number of pages: 10
Journal: IEEE Transactions on Signal Processing
Volume: 63
Issue number: 12
DOIs
State: Published - Jun 15 2015

Bibliographical note

Publisher Copyright:
© 1991-2012 IEEE.

Keywords

  • Bayesian learning
  • Social learning
  • asymptotic learning
  • information cascade
  • random decision making
