3D Trajectory Design of UAV Based on Deep Reinforcement Learning in Time-varying Scenes

Qingya Li, Li Guo, Chao Dong, Xidong Mu
DOI: 10.1145/3507971.3507982
Published in: Proceedings of the 7th International Conference on Communication and Information Processing, 2021-12-16
Citations: 0

Abstract

A joint framework is proposed for the 3D trajectory design of an unmanned aerial vehicle (UAV) acting as a flying base station under time-varying scenarios of user mobility and changing communication request probabilities. The 3D trajectory design problem is formulated to maximize throughput over the UAV's flight period while satisfying the rate requirements of all ground users (GUEs). Specifically, we consider that GUEs change their positions and communication request probabilities at each time slot; the UAV needs to predict these changes so that it can design its 3D trajectory in advance to achieve the optimization target. To solve this problem, an echo state network (ESN) based prediction algorithm is first proposed for predicting the positions and communication request probabilities of GUEs. Based on these predictions, a deep reinforcement learning (DRL) method is then invoked to find the optimal UAV deployment location in each time slot. The proposed method 1) uses ESN-based predictions as part of the DRL agent's state; 2) designs the action and reward that let the DRL agent learn the environment and its dynamics; and 3) selects the optimal strategy under the guidance of a double deep Q-network (DDQN). Simulation results show that, with the proposed algorithm, the UAV can dynamically adjust its trajectory to adapt to time-varying scenarios, achieving throughput gains of about 10.68%.
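The ESN-based predictor described in the abstract could be sketched as below. This is a minimal, generic echo state network (fixed random reservoir, ridge-regression readout), not the authors' implementation: the class name, reservoir size, spectral radius, and ridge coefficient are all illustrative assumptions.

```python
import numpy as np

class ESNPredictor:
    """Minimal echo state network: fixed random reservoir, trained linear readout.
    Could be fed a GUE's past (x, y, request-probability) samples to predict the
    next time slot's values (illustrative sketch, not the paper's exact model)."""

    def __init__(self, n_in, n_out, n_res=200, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale recurrent weights so the spectral radius < 1 (echo state property)
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_out = None
        self.n_res = n_res

    def _run(self, inputs):
        # Drive the reservoir with the input sequence, collecting its states
        x = np.zeros(self.n_res)
        states = []
        for u in inputs:
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, inputs, targets, ridge=1e-6):
        # Ridge-regression readout: W_out = Y^T S (S^T S + ridge * I)^(-1)
        S = self._run(inputs)
        self.W_out = targets.T @ S @ np.linalg.inv(S.T @ S + ridge * np.eye(self.n_res))

    def predict(self, inputs):
        return self._run(inputs) @ self.W_out.T
```

Only the linear readout is trained, which is what makes ESNs cheap enough to retrain online as new GUE observations arrive each slot.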
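The DDQN guidance mentioned in the abstract decouples action selection (online network) from action evaluation (target network), which reduces the Q-value over-estimation of plain DQN. A sketch of the standard double-DQN target computation follows; the function name, batch layout, and discount factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
    next_q_online / next_q_target are (batch, n_actions) Q-value arrays for s'."""
    best_actions = np.argmax(next_q_online, axis=1)                    # selection
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]   # evaluation
    return rewards + gamma * (1.0 - dones) * evaluated                 # bootstrap
```

In the paper's setting, the state fed to the networks would include the ESN predictions of GUE positions and request probabilities, and the reward would reflect the throughput achieved at the chosen deployment location.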