Multi-agent reinforcement learning via double averaging primal-dual optimization

Hoi To Wai, Zhuoran Yang, Mingyi Hong, Zhaoran Wang

Research output: Contribution to journal › Conference article › peer-review

16 Scopus citations

Abstract

Despite the success of single-agent reinforcement learning, multi-agent reinforcement learning (MARL) remains challenging due to complex interactions between agents. Motivated by decentralized applications such as sensor networks, swarm robotics, and power grids, we study policy evaluation in MARL, where agents with jointly observed state-action pairs and private local rewards collaborate to learn the value of a given policy. In this paper, we propose a double averaging scheme, where each agent iteratively performs averaging over both space and time to incorporate neighboring gradient information and local reward information, respectively. We prove that the proposed algorithm converges to the optimal solution at a global geometric rate. In particular, such an algorithm is built upon a primal-dual reformulation of the mean squared projected Bellman error minimization problem, which gives rise to a decentralized convex-concave saddle-point problem. To the best of our knowledge, the proposed double averaging primal-dual optimization algorithm is the first to achieve fast finite-time convergence on decentralized convex-concave saddle-point problems.
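The double averaging idea described in the abstract can be illustrated on a toy decentralized saddle-point problem. The sketch below is a hedged illustration, not the paper's algorithm: it uses a quadratic least-squares saddle point with per-agent duals, gossip mixing over a hypothetical 4-agent ring for the averaging over space, and gradient tracking as a stand-in for the paper's incremental averaging of gradient information over time. All problem data, the mixing matrix, and the step size are illustrative assumptions.

```python
import numpy as np

# Toy decentralized saddle-point problem (illustrative, NOT the paper's setup):
#   min_theta max_{w_1..w_N} sum_i [ w_i^T (A_i theta - b_i) - 0.5 ||w_i||^2 ],
# whose saddle point is the least-squares solution of the stacked system
# {A_i theta = b_i}. Each agent i holds (A_i, b_i) privately, mirroring the
# private local rewards in the policy-evaluation problem.
rng = np.random.default_rng(0)
N, d = 4, 3
A = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(N)]
b = [rng.standard_normal(d) for _ in range(N)]

# Doubly stochastic mixing matrix for a 4-agent ring (averaging over space).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

theta = np.zeros((N, d))                                  # primal copies
w = np.stack([A[i] @ theta[i] - b[i] for i in range(N)])  # private duals
g = np.stack([A[i].T @ w[i] for i in range(N)])           # tracked gradients
prev = g.copy()
step = 0.2

for _ in range(1000):
    # Averaging over space: mix neighbors' primal iterates, then descend
    # along the tracked gradient estimate.
    theta = W @ theta - step * g
    for i in range(N):
        # Exact maximization of agent i's strongly concave local dual.
        w[i] = A[i] @ theta[i] - b[i]
    # Averaging over time (gradient tracking): fold the newest local
    # gradient information into the mixed running estimate.
    cur = np.stack([A[i].T @ w[i] for i in range(N)])
    g = W @ g + cur - prev
    prev = cur

# Centralized saddle point for comparison: the stacked least-squares solution.
theta_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                             sum(Ai.T @ bi for Ai, bi in zip(A, b)))
print(np.max(np.abs(theta - theta_star)))  # all agents reach consensus near theta_star
```

In this sketch every agent's copy of `theta` converges geometrically to the common saddle point, echoing the global geometric rate claimed in the abstract; the combination of a consensus step with a time-recursive gradient estimate is what lets a constant step size achieve exact (rather than neighborhood) convergence.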

Original language: English (US)
Pages (from-to): 9649-9660
Number of pages: 12
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
State: Published - 2018
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: Dec 2 2018 - Dec 8 2018

Bibliographical note

Funding Information:
Acknowledgement: The authors would like to thank the three anonymous reviewers for their useful comments. HTW's work was supported by the grant NSF CCF-BSF 1714672. MH's work has been supported in part by NSF-CMMI 1727757 and AFOSR 15RT0767.

