Alongside autonomous driving, remote driving is a prominent application that leverages communications to mitigate high-risk situations such as driver fatigue. However, remote driving remains challenging, particularly in urban areas where signal absorption and reflection by buildings can severely degrade control links. This paper proposes MUFO, a deep-reinforcement-learning-based multi-UAV flight-optimization framework with two objectives: (i) path planning to determine optimal UAV trajectories that sustain stable links for remotely driven vehicles, and (ii) efficient deployment to minimize the number of UAVs and their energy consumption while guaranteeing service continuity at a minimum data rate. First, coverage and flight cost are formulated as a multi-objective optimization problem with constraints on UAV energy and collision avoidance. Based on a built-in map of weak-signal areas, a novel multi-agent deep deterministic policy gradient (MADDPG) scheme is proposed to determine the best flying strategy for the UAVs: flying over weak-signal areas, enhancing signal strength, and relaying connectivity when remotely driven vehicles arrive there. Simulation results show that MADDPG in MUFO outperforms state-of-the-art deep-learning and search-based methods by up to 8% in deployment efficiency (energy savings and number of deployed UAVs), particularly when ground traffic-jam areas are dense and UAVs must hover over them for unexpectedly long periods. MUFO's strength is that it considerably improves UAV deployment efficiency via cumulative learning from many trials and completed missions.
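The multi-objective formulation above (coverage versus flight cost, subject to energy and collision-avoidance constraints) can be sketched as a scalarized per-agent reward that an MADDPG learner would maximize. The weights, the minimum-separation penalty, and the function name below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def uav_reward(covered_area, total_weak_area, energy_used, energy_budget,
               positions, min_sep=10.0, w_cov=1.0, w_energy=0.5, w_collision=5.0):
    """Illustrative scalarized reward for one UAV agent (assumed form).

    covered_area / total_weak_area : fraction of weak-signal zones served (maximize)
    energy_used / energy_budget    : normalized flight-energy consumption (minimize)
    positions                      : (x, y, z) of all UAVs; any pair closer than
                                     min_sep incurs a collision-avoidance penalty
    """
    coverage = covered_area / total_weak_area
    energy = energy_used / energy_budget
    # Count UAV pairs violating the minimum-separation constraint.
    violations = 0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < min_sep:
                violations += 1
    return w_cov * coverage - w_energy * energy - w_collision * violations
```

In an MADDPG setup, each UAV's actor would select a flight action from local observations while a centralized critic scores joint actions against a reward of this shape; the penalty weight `w_collision` is kept large so separation violations dominate any coverage gain.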
