High-density power transmission lines (DPTLs), serving as transmission hubs, are characterized by complex line layouts and are difficult to inspect. We design a DPTL inspection network in which obstacle-traversable inspection robots (OTIRs) move along transmission lines and act as sensing and computing nodes, achieving full-coverage inspection of the lines. To improve inspection efficiency, the network integrates Mobile Edge Computing (MEC) and Device-to-Device (D2D) communication technologies to facilitate task offloading. We formulate a task-offloading optimization problem whose objectives are to balance and minimize energy consumption. Considering the dynamic, time-varying environment, we use a Lyapunov dynamic energy-queue optimization scheme to transform the stochastic optimization problem into independent decisions for each time slot. In addition, we propose a multi-agent reinforcement learning algorithm based on an adaptive reward mechanism. This algorithm integrates digital twin technology to perceive the dynamic state of the DPTL network in real time and optimizes task scheduling through collaborative communication, thereby improving the overall efficiency and stability of the system. Simulation results demonstrate that the proposed algorithm exhibits good convergence. Compared with the adaptive reward multi-agent advantage actor-critic (AR-MAA2C) and deep deterministic policy gradient (DDPG) algorithms, the total inspection energy consumption of the OTIRs is reduced by approximately 10.4% and 13.7%, respectively. Moreover, energy utilization efficiency is improved by about 5.6% and 7.3%, and the energy consumption balance reaches nearly 0.98, indicating that the proposed method effectively achieves high energy optimization and load balancing.
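The per-slot transformation and the balance metric mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes a standard Lyapunov drift-plus-penalty construction with a virtual energy-deficit queue per robot, and it uses Jain's fairness index as one common way to quantify an energy consumption balance near 0.98; the function names and the trade-off weight `v` are illustrative assumptions.

```python
# Hedged sketch of a Lyapunov drift-plus-penalty per-slot decision and an
# energy-balance metric (illustrative, not the paper's exact scheme).

def update_queue(q, energy_used, energy_budget):
    """Virtual energy-deficit queue: grows when a robot overspends its
    per-slot energy budget, shrinks (toward zero) when it underspends."""
    return max(q + energy_used - energy_budget, 0.0)

def drift_plus_penalty(v, task_cost, q, energy_used):
    """Per-slot objective: v weighs the original task cost against
    queue stability (long-term energy constraint)."""
    return v * task_cost + q * energy_used

def best_decision(v, q, decisions):
    """Pick the offloading choice minimizing drift-plus-penalty in this slot.
    decisions: list of (task_cost, energy_used) candidate tuples."""
    return min(decisions, key=lambda d: drift_plus_penalty(v, d[0], q, d[1]))

def jains_fairness(energies):
    """Jain's fairness index over per-robot energy use: 1.0 = perfectly
    balanced consumption across all OTIRs."""
    s = sum(energies)
    return s * s / (len(energies) * sum(e * e for e in energies))
```

With an empty queue the cheaper task is chosen; once the queue builds up, the lower-energy option wins, which is how the long-term energy constraint steers per-slot choices.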
