Dynamic games arise when multiple agents with differing objectives choose control inputs to a dynamic system. Compared to single-agent control problems, however, computational methods for dynamic games are relatively limited: only very specialized dynamic games can be solved exactly, so approximation algorithms are required. In this paper, we show how to extend a recursive Newton algorithm and differential dynamic programming (DDP) to the case of full-information non-zero-sum dynamic games. We show that the iterates of Newton's method and DDP are sufficiently close for DDP to inherit the quadratic convergence rate of Newton's method.
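To make the DDP iteration concrete, the following is a minimal single-player sketch on a scalar linear-quadratic problem; it is not the paper's multi-player game formulation, and all problem data (`a`, `b`, `q`, `r`, `qf`, `T`, `x0`) are illustrative assumptions. Each iteration performs a backward pass that propagates a quadratic value-function expansion and a forward pass that applies the resulting affine control update; on an LQ problem the expansions are exact, so a single iteration recovers the optimum.

```python
# Illustrative DDP sketch: scalar dynamics x_{t+1} = a*x_t + b*u_t,
# stage cost 0.5*(q*x^2 + r*u^2), terminal cost 0.5*qf*x^2.
# All constants below are made-up illustrative values.
T, a, b, q, r, qf, x0 = 20, 1.0, 1.0, 1.0, 1.0, 1.0, 5.0

def rollout(x0, us):
    # Simulate the dynamics under a control sequence.
    xs = [x0]
    for u in us:
        xs.append(a * xs[-1] + b * u)
    return xs

def cost(xs, us):
    # Total quadratic cost of a trajectory.
    J = sum(0.5 * (q * x * x + r * u * u) for x, u in zip(xs, us))
    return J + 0.5 * qf * xs[-1] ** 2

def ddp_iteration(xs, us):
    # Backward pass: propagate the quadratic expansion (Vx, Vxx)
    # of the value function and compute affine policy terms (k, K).
    Vx, Vxx = qf * xs[-1], qf
    ks, Ks = [], []
    for t in reversed(range(T)):
        Qx = q * xs[t] + a * Vx
        Qu = r * us[t] + b * Vx
        Qxx = q + a * a * Vxx
        Quu = r + b * b * Vxx
        Qux = a * b * Vxx
        k, K = -Qu / Quu, -Qux / Quu
        ks.append(k)
        Ks.append(K)
        Vx = Qx - Qux * Qu / Quu
        Vxx = Qxx - Qux * Qux / Quu
    ks.reverse()
    Ks.reverse()
    # Forward pass: roll out the updated affine policy.
    x_new = xs[0]
    xs_new, us_new = [x_new], []
    for t in range(T):
        u = us[t] + ks[t] + Ks[t] * (x_new - xs[t])
        us_new.append(u)
        x_new = a * x_new + b * u
        xs_new.append(x_new)
    return xs_new, us_new

us = [0.0] * T
xs = rollout(x0, us)
for _ in range(3):
    xs, us = ddp_iteration(xs, us)
print(round(cost(xs, us), 6))
```

For this LQ instance the result matches the Riccati solution; in the nonlinear and game-theoretic settings treated in the paper, the backward pass instead couples the players' quadratic expansions, and convergence is the subject of the analysis.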
Original language: English (US)
Title of host publication: 2019 IEEE 58th Conference on Decision and Control, CDC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
State: Published - Dec 2019
Event: 58th IEEE Conference on Decision and Control, CDC 2019 - Nice, France
Duration: Dec 11 2019 → Dec 13 2019
Publication series: Proceedings of the IEEE Conference on Decision and Control
Conference: 58th IEEE Conference on Decision and Control, CDC 2019
Period: 12/11/19 → 12/13/19
Bibliographical note: Publisher Copyright © 2019 IEEE.