Graph Reinforcement Learning for Multi-Aircraft Conflict Resolution

IF 14.0 · Zone 1 (Engineering & Technology) · Q1 (Computer Science, Artificial Intelligence) · IEEE Transactions on Intelligent Vehicles · Pub Date: 2024-02-12 · DOI: 10.1109/TIV.2024.3364652
Yumeng Li;Yunhe Zhang;Tong Guo;Yu Liu;Yisheng Lv;Wenbo Du
{"title":"多飞机冲突解决的图强化学习","authors":"Yumeng Li;Yunhe Zhang;Tong Guo;Yu Liu;Yisheng Lv;Wenbo Du","doi":"10.1109/TIV.2024.3364652","DOIUrl":null,"url":null,"abstract":"The escalating density of airspace has led to sharply increased conflicts between aircraft. Efficient and scalable conflict resolution methods are crucial to mitigate collision risks. Existing learning-based methods become less effective as the scale of aircraft increases due to their redundant information representations. In this paper, to accommodate the increased airspace density, a novel graph reinforcement learning (GRL) method is presented to efficiently learn deconfliction strategies. A time-evolving conflict graph is exploited to represent the local state of individual aircraft and the global spatiotemporal relationships between them. Equipped with the conflict graph, GRL can efficiently learn deconfliction strategies by selectively aggregating aircraft state information through a multi-head attention-boosted graph neural network. Furthermore, a temporal regularization mechanism is proposed to enhance learning stability in highly dynamic environments. Comprehensive experimental studies have been conducted on an OpenAI Gym-based flight simulator. Compared with the existing state-of-the-art learning-based methods, the results demonstrate that GRL can save much training time while achieving significantly better deconfliction strategies in terms of safety and efficiency metrics. In addition, GRL has a strong power of scalability and robustness with increasing aircraft scale.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 3","pages":"4529-4540"},"PeriodicalIF":14.0000,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Graph Reinforcement Learning for Multi-Aircraft Conflict Resolution\",\"authors\":\"Yumeng Li;Yunhe Zhang;Tong Guo;Yu Liu;Yisheng Lv;Wenbo Du\",\"doi\":\"10.1109/TIV.2024.3364652\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The escalating density of airspace has led to sharply increased conflicts between aircraft. Efficient and scalable conflict resolution methods are crucial to mitigate collision risks. Existing learning-based methods become less effective as the scale of aircraft increases due to their redundant information representations. In this paper, to accommodate the increased airspace density, a novel graph reinforcement learning (GRL) method is presented to efficiently learn deconfliction strategies. A time-evolving conflict graph is exploited to represent the local state of individual aircraft and the global spatiotemporal relationships between them. Equipped with the conflict graph, GRL can efficiently learn deconfliction strategies by selectively aggregating aircraft state information through a multi-head attention-boosted graph neural network. Furthermore, a temporal regularization mechanism is proposed to enhance learning stability in highly dynamic environments. Comprehensive experimental studies have been conducted on an OpenAI Gym-based flight simulator. Compared with the existing state-of-the-art learning-based methods, the results demonstrate that GRL can save much training time while achieving significantly better deconfliction strategies in terms of safety and efficiency metrics. 
In addition, GRL has a strong power of scalability and robustness with increasing aircraft scale.\",\"PeriodicalId\":36532,\"journal\":{\"name\":\"IEEE Transactions on Intelligent Vehicles\",\"volume\":\"9 3\",\"pages\":\"4529-4540\"},\"PeriodicalIF\":14.0000,\"publicationDate\":\"2024-02-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Intelligent Vehicles\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10432995/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Vehicles","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10432995/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The escalating density of airspace has led to sharply increased conflicts between aircraft. Efficient and scalable conflict resolution methods are crucial to mitigating collision risks. Existing learning-based methods become less effective as the number of aircraft grows because of their redundant information representations. In this paper, to accommodate the increased airspace density, a novel graph reinforcement learning (GRL) method is presented to efficiently learn deconfliction strategies. A time-evolving conflict graph is exploited to represent the local state of individual aircraft and the global spatiotemporal relationships between them. Equipped with the conflict graph, GRL can efficiently learn deconfliction strategies by selectively aggregating aircraft state information through a multi-head attention-boosted graph neural network. Furthermore, a temporal regularization mechanism is proposed to enhance learning stability in highly dynamic environments. Comprehensive experimental studies have been conducted on an OpenAI Gym-based flight simulator. Compared with existing state-of-the-art learning-based methods, the results demonstrate that GRL substantially reduces training time while achieving significantly better deconfliction strategies in terms of safety and efficiency metrics. In addition, GRL exhibits strong scalability and robustness as the number of aircraft increases.
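To make the two core ideas in the abstract concrete, the sketch below illustrates (1) building a conflict graph from predicted pairwise separation and (2) aggregating each aircraft's neighbour states with masked multi-head attention. This is not the authors' implementation: the function names (`build_conflict_graph`, `multi_head_attention_aggregate`), the 120-second look-ahead horizon, the 5 NM separation threshold, the toy state layout, and the random projections standing in for learned weights are all assumptions made for illustration, and plain NumPy is used instead of a deep learning framework.

```python
# Minimal illustrative sketch (not the paper's code) of a conflict graph plus
# multi-head attention aggregation. All thresholds and dimensions are assumed.
import numpy as np

def build_conflict_graph(positions: np.ndarray, velocities: np.ndarray,
                         horizon: float = 120.0, sep_min: float = 9260.0) -> np.ndarray:
    """Adjacency matrix: aircraft i and j are connected if their predicted
    distance within `horizon` seconds drops below `sep_min` metres (~5 NM)."""
    n = len(positions)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            dp = positions[i] - positions[j]      # relative position
            dv = velocities[i] - velocities[j]    # relative velocity
            # time of closest approach, clipped to the look-ahead horizon
            t = np.clip(-(dp @ dv) / max(dv @ dv, 1e-9), 0.0, horizon)
            if np.linalg.norm(dp + t * dv) < sep_min:
                adj[i, j] = adj[j, i] = True
    return adj

def multi_head_attention_aggregate(states: np.ndarray, adj: np.ndarray,
                                   n_heads: int = 4, d_head: int = 8,
                                   rng=np.random.default_rng(0)) -> np.ndarray:
    """Each aircraft attends only to its conflict-graph neighbours (and itself),
    so unrelated aircraft are masked out instead of padded into the state."""
    n, d_in = states.shape
    outputs = []
    for _ in range(n_heads):
        # random projections stand in for learned weights in this sketch
        wq, wk, wv = (rng.standard_normal((d_in, d_head)) / np.sqrt(d_in)
                      for _ in range(3))
        q, k, v = states @ wq, states @ wk, states @ wv
        scores = q @ k.T / np.sqrt(d_head)
        mask = adj | np.eye(n, dtype=bool)        # keep self-loops
        scores = np.where(mask, scores, -np.inf)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        outputs.append(weights @ v)
    return np.concatenate(outputs, axis=1)        # per-aircraft embedding

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 50_000, size=(6, 2))     # 6 aircraft in a 50 km sector
    vel = rng.uniform(-250, 250, size=(6, 2))     # velocities in m/s
    state = np.hstack([pos, vel])                 # toy per-aircraft state vector
    adj = build_conflict_graph(pos, vel)
    emb = multi_head_attention_aggregate(state, adj)
    print("conflict pairs:", int(adj.sum() // 2), "| embedding shape:", emb.shape)
```

Restricting attention to conflict-graph neighbours is what keeps the representation compact as the number of aircraft grows; the paper's temporal regularization, which stabilizes learning across consecutive graph snapshots, is not shown in this sketch.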
Source journal
IEEE Transactions on Intelligent Vehicles
Subject category: Mathematics - Control and Optimization
CiteScore: 12.10
Self-citation rate: 13.40%
Annual publications: 177
Journal description: The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges. Its focus is on providing critical information to the intelligent vehicle community, serving as a dissemination vehicle for IEEE ITS Society members and others interested in state-of-the-art developments and progress in research and applications related to intelligent vehicles.