Unmanned Aerial Vehicles (UAVs) play a critical role in data collection for a wide range of Internet of Things (IoT) applications across remote, urban, and marine environments. In large-scale deployments, UAVs face complex decision-making challenges, for which Deep Reinforcement Learning (DRL) has emerged as a promising solution. This paper presents a comprehensive review of research on UAV-assisted IoT utilizing DRL, covering key research questions relating to DRL algorithm variants, deployment objectives, architectural features, integrated technologies, UAV roles, optimization constraints, energy management strategies, and performance metrics. Findings indicate that value-based and actor-critic algorithms are the most commonly employed, targeting objectives such as path planning, transmit power control, scheduling, velocity and altitude control, and charging optimization. Other architectural considerations include clustering, security, obstacle avoidance, buffered sensors, and multi-UAV coordination. Beyond data collection, UAVs are also used for tasks such as device selection, data aggregation, and sensor charging, with energy management primarily achieved through charging and energy harvesting techniques. Performance is typically assessed using metrics such as energy efficiency, throughput, latency, packet loss, and Age of Information (AoI). The paper concludes by outlining promising research directions and open challenges critical to the successful deployment of UAVs as aerial communication platforms, especially for IoT data collection. By organizing existing work across these key themes, this review offers a valuable reference for researchers and technology professionals alike.