Underwater optical wireless sensor networks (UOWSNs) have attracted considerable interest owing to their high transmission rates, ultrawide bandwidth, and low latency. However, because of limited energy resources and the highly dynamic topology caused by water flow, providing low-power, reliable routing in UOWSNs is challenging. To tackle this issue, in this article we propose an efficient routing protocol for UOWSNs based on multiagent reinforcement learning, termed DMARL. The network is first modeled as a distributed multiagent system, and residual energy and link quality are incorporated into the routing protocol design to improve adaptation to dynamic environments and to prolong network lifetime. Additionally, two optimization strategies are proposed to accelerate the convergence of the reinforcement learning algorithm. On this basis, a reward mechanism is designed for the distributed system. Simulation results show that the DMARL-based routing protocol achieves low energy consumption and a high packet delivery ratio (over 90%), and that it is well suited to networks in which the average number of neighbor nodes is less than 14.
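The mechanism described above can be sketched as a per-node Q-learning agent whose reward combines residual energy and link quality. This is a minimal illustrative sketch, not the paper's exact formulation: the class name `NodeAgent`, the weighted-sum reward, and the epsilon-greedy next-hop selection are assumptions introduced here for clarity.

```python
import random

class NodeAgent:
    """One sensor node acting as an independent learning agent.

    q[n] estimates the long-term value of forwarding a packet to neighbor n.
    Hypothetical sketch: the reward weights and update rule are illustrative
    assumptions, not DMARL's published formulas.
    """

    def __init__(self, neighbors, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {n: 0.0 for n in neighbors}
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def reward(self, residual_energy, link_quality, w_e=0.5, w_l=0.5):
        # Reward favors next hops with more residual energy (normalized to
        # [0, 1]) and better link quality (e.g. packet reception ratio).
        return w_e * residual_energy + w_l * link_quality

    def choose_next_hop(self):
        # Epsilon-greedy selection over candidate neighbors.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, next_hop, r, next_hop_max_q):
        # Standard Q-learning update; next_hop_max_q is the best Q-value
        # reported back by the chosen neighbor (a distributed exchange).
        old = self.q[next_hop]
        self.q[next_hop] = old + self.alpha * (r + self.gamma * next_hop_max_q - old)
```

In a distributed deployment each node would run one such agent, piggybacking its maximum Q-value and residual energy on acknowledgments so that upstream neighbors can apply the update without global state.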
Bibliographical note
Funding Information:
Manuscript received November 5, 2019; revised March 2, 2020; accepted April 12, 2020. Date of publication April 23, 2020; date of current version October 9, 2020. This work was supported in part by the National Natural Science Foundation of China under Grant 61471177 and Grant 61901302, and in part by the Open Research Fund from Shenzhen Research Institute of Big Data under Grant 2019ORF01014. (Corresponding author: Xiaoya Hu.) Xinge Li is with the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: email@example.com).
© 2020 IEEE.
- Dynamic environment
- Energy consumption
- Multiagent reinforcement learning (MARL)
- Routing protocol
- Underwater optical wireless sensor network (UOWSN)