A deep reinforcement learning algorithm for the power order optimization allocation of AGC in interconnected power grids

Lei Xi, Lipeng Zhou, Lang Liu, Dongliang Duan, Yanchun Xu, Liuqing Yang, Shouxiang Wang

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

The integration of distributed generation (solar and wind power), energy storage devices, and electric vehicles causes unpredictable disturbances in power grids. Coordinating distributed generation, loads, and energy storage to better facilitate the utilization of new energy has become a top priority. Therefore, a novel algorithm based on deep reinforcement learning, namely the deep PDWoLF-PHC (policy dynamics based win or learn fast policy hill-climbing) network (DPDPN), is proposed to allocate the power order among the various generators. The proposed algorithm combines the decision mechanism of reinforcement learning with the prediction mechanism of a deep neural network to obtain optimal coordinated control of the source-grid-load system. Consequently, it copes with stochastic disturbances and improves the utilization rate of new energy. Simulations are conducted on an improved IEEE two-area model and on a case from the Guangdong power grid. Results show that the proposed algorithm improves the adaptability and control performance of the power system compared with existing strategies.
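To make the learning mechanism named in the abstract concrete, the sketch below shows the classical WoLF-PHC (win or learn fast - policy hill climbing) update that the paper's PDWoLF-PHC network builds on: a Q-learning step, an average-policy estimate, and a variable-step hill climb toward the greedy action. The state/action sizes, rewards, and learning rates here are illustrative assumptions, not the paper's settings, and the deep-network prediction stage of DPDPN is omitted.

```python
import numpy as np

class WoLFPHCAgent:
    """Minimal tabular WoLF-PHC sketch (assumed parameters, not the paper's)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.Q = np.zeros((n_states, n_actions))
        self.pi = np.full((n_states, n_actions), 1.0 / n_actions)      # current policy
        self.pi_avg = np.full((n_states, n_actions), 1.0 / n_actions)  # average policy
        self.counts = np.zeros(n_states)
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose

    def act(self, s, rng):
        # Sample an action (e.g. a discrete power-order share) from the policy.
        return int(rng.choice(self.pi.shape[1], p=self.pi[s]))

    def update(self, s, a, r, s_next):
        # 1) Standard Q-learning value update.
        self.Q[s, a] += self.alpha * (r + self.gamma * self.Q[s_next].max()
                                      - self.Q[s, a])
        # 2) Incrementally track the average policy.
        self.counts[s] += 1
        self.pi_avg[s] += (self.pi[s] - self.pi_avg[s]) / self.counts[s]
        # 3) "Win or learn fast": small step when the current policy beats the
        #    average policy in expected value, large step when it is losing.
        winning = self.pi[s] @ self.Q[s] > self.pi_avg[s] @ self.Q[s]
        delta = self.delta_win if winning else self.delta_lose
        # 4) Hill-climb toward the greedy action, then renormalize.
        best = int(np.argmax(self.Q[s]))
        n = self.pi.shape[1]
        for a_i in range(n):
            if a_i == best:
                self.pi[s, a_i] = min(1.0, self.pi[s, a_i] + delta)
            else:
                self.pi[s, a_i] = max(0.0, self.pi[s, a_i] - delta / (n - 1))
        self.pi[s] /= self.pi[s].sum()
```

In DPDPN the tabular policy table above is replaced by a deep network, which is what lets the method scale to the continuous AGC disturbance signals described in the abstract.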

Original language: English (US)
Article number: 9056999
Pages (from-to): 712-723
Number of pages: 12
Journal: CSEE Journal of Power and Energy Systems
Volume: 6
Issue number: 3
DOIs
State: Published - Sep 2020

Bibliographical note

Publisher Copyright:
© 2015 CSEE.

Keywords

  • Automatic generation control
  • DPDPN
  • deep reinforcement learning
  • power order allocation
