Anuratha Kesavan, Nandhini Jembu Mohanram, Soshya Joshi, Uma Sankar
{"title":"基于深度强化学习的无人飞行器计算卸载,用于灾害管理","authors":"Anuratha Kesavan, Nandhini Jembu Mohanram, Soshya Joshi, Uma Sankar","doi":"10.2478/jee-2024-0013","DOIUrl":null,"url":null,"abstract":"\n The emergence of Internet of Things enabled with mobile computing has the applications in the field of unmanned aerial vehicle (UAV) development. The development of mobile edge computational offloading in UAV is dependent on low latency applications such as disaster management, Forest fire control and remote operations. The task completion efficiency is improved by means of using edge intelligence algorithm and the optimal offloading policy is constructed on the application of deep reinforcement learning (DRL) in order to fulfill the target demand and to ease the transmission delay. The joint optimization curtails the weighted sum of average energy consumption and execution delay. This edge intelligence algorithm combined with DRL network exploits computing operation to increase the probability that at least one of the tracking and data transmission is usable. The proposed joint optimization significantly performs well in terms of execution delay, offloading cost and effective convergence over the prevailing methodologies proposed for UAV development. 
The proposed DRL enables the UAV to real-time decisions based on the disaster scenario and computing resources availability.","PeriodicalId":508697,"journal":{"name":"Journal of Electrical Engineering","volume":"74 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep reinforcement learning based computing offloading in unmanned aerial vehicles for disaster management\",\"authors\":\"Anuratha Kesavan, Nandhini Jembu Mohanram, Soshya Joshi, Uma Sankar\",\"doi\":\"10.2478/jee-2024-0013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n The emergence of Internet of Things enabled with mobile computing has the applications in the field of unmanned aerial vehicle (UAV) development. The development of mobile edge computational offloading in UAV is dependent on low latency applications such as disaster management, Forest fire control and remote operations. The task completion efficiency is improved by means of using edge intelligence algorithm and the optimal offloading policy is constructed on the application of deep reinforcement learning (DRL) in order to fulfill the target demand and to ease the transmission delay. The joint optimization curtails the weighted sum of average energy consumption and execution delay. This edge intelligence algorithm combined with DRL network exploits computing operation to increase the probability that at least one of the tracking and data transmission is usable. The proposed joint optimization significantly performs well in terms of execution delay, offloading cost and effective convergence over the prevailing methodologies proposed for UAV development. 
The proposed DRL enables the UAV to real-time decisions based on the disaster scenario and computing resources availability.\",\"PeriodicalId\":508697,\"journal\":{\"name\":\"Journal of Electrical Engineering\",\"volume\":\"74 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Electrical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2478/jee-2024-0013\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Electrical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2478/jee-2024-0013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep reinforcement learning based computing offloading in unmanned aerial vehicles for disaster management
The emergence of the Internet of Things, combined with mobile computing, has found applications in the development of unmanned aerial vehicles (UAVs). Mobile edge computational offloading in UAVs is driven by low-latency applications such as disaster management, forest fire control, and remote operations. Task completion efficiency is improved by an edge intelligence algorithm, and an optimal offloading policy is constructed using deep reinforcement learning (DRL) to meet the target demand and reduce transmission delay. The joint optimization minimizes the weighted sum of average energy consumption and execution delay. The edge intelligence algorithm, combined with the DRL network, exploits the computing operation to increase the probability that at least one of tracking and data transmission is usable. The proposed joint optimization performs significantly better than prevailing UAV methodologies in terms of execution delay, offloading cost, and convergence. The proposed DRL enables the UAV to make real-time decisions based on the disaster scenario and the availability of computing resources.
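To make the objective concrete, the weighted-sum cost described in the abstract can be sketched as follows. This is a minimal illustration only: the paper gives no equations or parameter values here, so the function names, weights, and the greedy local-vs-edge comparison are all assumptions; the actual method learns the offloading policy with DRL rather than comparing costs directly.

```python
# Illustrative sketch of a weighted-sum offloading cost (assumed form;
# not taken from the paper). The abstract states the joint optimization
# minimizes a weighted sum of average energy consumption and execution
# delay when deciding whether a UAV task runs locally or on an edge server.

def task_cost(energy_j, delay_s, w_energy=0.5, w_delay=0.5):
    """Weighted-sum cost; a DRL agent would use its negative as reward."""
    return w_energy * energy_j + w_delay * delay_s

def offload_decision(local_energy, local_delay, tx_energy, edge_delay):
    """Greedy baseline: offload when the edge-side cost is lower.
    A trained DRL policy would map the full state to this choice instead."""
    local = task_cost(local_energy, local_delay)
    edge = task_cost(tx_energy, edge_delay)
    return "offload" if edge < local else "local"

# Heavy task, cheap link: offloading wins.
print(offload_decision(local_energy=8.0, local_delay=2.5,
                       tx_energy=1.5, edge_delay=0.8))  # → offload
```

The weights trade energy against latency; in a disaster scenario with a strict deadline, `w_delay` would be raised relative to `w_energy`.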