Despite the success of single-agent reinforcement learning, multi-agent reinforcement learning (MARL) remains challenging due to complex interactions between agents. Motivated by decentralized applications such as sensor networks, swarm robotics, and power grids, we study policy evaluation in MARL, where agents with jointly observed state-action pairs and private local rewards collaborate to learn the value of a given policy. In this paper, we propose a double averaging scheme in which each agent iteratively averages over both space and time to incorporate neighboring gradient information and local reward information, respectively. We prove that the proposed algorithm converges to the optimal solution at a global geometric rate. The algorithm is built upon a primal-dual reformulation of the mean squared projected Bellman error (MSPBE) minimization problem, which gives rise to a decentralized convex-concave saddle-point problem. To the best of our knowledge, the proposed double averaging primal-dual optimization algorithm is the first to achieve fast finite-time convergence on decentralized convex-concave saddle-point problems.
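For readers wanting the shape of the saddle-point problem the abstract refers to, the sketch below records the standard Fenchel-dual reformulation of MSPBE minimization under linear function approximation, followed by one illustrative form of the "averaging over space and time" update. The symbols φ, γ, A, C, b_i, W, α, and g_i are common in this literature but are assumptions here, and the update shown is a generic consensus-plus-gradient-averaging recursion, not necessarily the paper's exact algorithm.

```latex
% Linear value-function approximation V(s) \approx \phi(s)^\top \theta with
% feature map \phi, discount \gamma, and N agents with private rewards r_i.
% Standard moment matrices (expectations under the stationary distribution):
%   A = \mathbb{E}\big[\phi(s)\,(\phi(s) - \gamma \phi(s'))^\top\big],
%   C = \mathbb{E}\big[\phi(s)\,\phi(s)^\top\big],
%   b_i = \mathbb{E}\big[r_i \, \phi(s)\big]  % agent i's reward enters only here.
%
% Applying the Fenchel dual of x \mapsto \tfrac{1}{2}\|x\|_{C^{-1}}^2 to each
% agent's residual b_i - A\theta yields a decentralized convex-concave
% saddle-point problem with one dual variable w_i per agent:
\min_{\theta} \; \max_{w_1, \dots, w_N} \;
  \frac{1}{N} \sum_{i=1}^{N}
  \Big( \langle w_i, \, b_i - A\theta \rangle
        - \tfrac{1}{2}\, w_i^\top C \, w_i \Big)
% Maximizing out each w_i gives \frac{1}{2N}\sum_i \|A\theta - b_i\|_{C^{-1}}^2,
% which differs from the MSPBE of the averaged reward only by a term constant
% in \theta, so the two problems share the same primal minimizers.
%
% One illustrative "double averaging" primal update for agent i at round t:
% averaging over space (consensus with neighbors j via a doubly stochastic
% mixing matrix W) and over time (running average of past sampled gradients g_i):
\theta_i^{t+1} = \sum_{j=1}^{N} W_{ij} \, \theta_j^{t} - \alpha \, s_i^{t},
\qquad
s_i^{t} = \frac{1}{t} \sum_{\tau=1}^{t} g_i^{\tau}
```

Note that in this formulation an agent's private reward enters only through its local b_i, so agents can run consensus on parameters and gradient estimates without ever exchanging raw rewards, matching the privacy structure described in the abstract.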
|Original language|English (US)|
|Number of pages|12|
|Journal|Advances in Neural Information Processing Systems|
|State|Published - 2018|
|Event|32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada|
|Duration|Dec 2 2018 → Dec 8 2018|
Bibliographical note
Funding Information: The authors would like to thank three anonymous reviewers for their useful comments. HTW's work was supported by grant NSF CCF-BSF 1714672. MH's work has been supported in part by NSF CMMI-1727757 and AFOSR 15RT0767.