In this paper, we consider the problem of social learning in a network of agents that repeatedly make decisions between two hypotheses. In every time slot, each agent sequentially obtains a private observation, generated independently under one of the hypotheses, and decides which hypothesis is true. The private belief of each agent is the posterior probability of one of the hypotheses conditioned on its private observations and the latest decisions of the other agents; this belief drives the agent's decision making. We present a Bayesian learning scheme in which the agents exploit the information contained in the other agents' decisions. We show that an information loop in the decisions can be avoided in the studied system, and we demonstrate the performance of the method through computer simulations.
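To fix ideas, the belief update underlying such a scheme can be sketched as a standard binary-hypothesis Bayesian update. The sketch below is illustrative only, not the paper's exact scheme: it updates an agent's private belief (the posterior probability of hypothesis H1) from one new private observation, using hypothetical Bernoulli observation models with parameters `p0` and `p1` under H0 and H1, and it omits the coupling to the other agents' decisions.

```python
def update_belief(belief, observation, p0=0.3, p1=0.7):
    """One Bayesian update of P(H1 | data) given a new private observation.

    belief: current posterior probability of H1.
    p0, p1: hypothetical probabilities of observing 1 under H0 and H1.
    """
    lik1 = p1 if observation == 1 else 1 - p1  # likelihood under H1
    lik0 = p0 if observation == 1 else 1 - p0  # likelihood under H0
    # Bayes' rule: posterior proportional to prior times likelihood.
    return belief * lik1 / (belief * lik1 + (1 - belief) * lik0)

# Example: starting from a uniform prior, a sequence of observations
# mostly equal to 1 pushes the belief toward H1.
belief = 0.5
for obs in [1, 1, 0, 1, 1]:
    belief = update_belief(belief, obs)
print(round(belief, 3))  # belief in H1 after five observations
```

In the paper's setting, an analogous update would additionally condition on the latest decisions of the other agents; the care required there is precisely to avoid the information loop mentioned above, in which an agent's own past decision re-enters its belief through its neighbors.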