Mixed-Reward Multiagent Proximal Policy Optimization Method for Two-on-Two Beyond-Visual-Range Air Combat

IEEE Canadian Journal of Electrical and Computer Engineering · IF 2.1 · Q3 (Computer Science, Hardware & Architecture) · Vol. 47, No. 4, pp. 206-217 · Pub Date: 2024-09-23 · DOI: 10.1109/ICJECE.2024.3451965
Haojie Peng;Weihua Li;Sifan Dai;Ruihai Chen
{"title":"二对二超视距空战的混合奖励多智能体近端策略优化方法","authors":"Haojie Peng;Weihua Li;Sifan Dai;Ruihai Chen","doi":"10.1109/ICJECE.2024.3451965","DOIUrl":null,"url":null,"abstract":"With recent advances in airborne weapons, modern air combats tend to be accomplished in the beyond-visual-range (BVR) phase. Multiaircraft cooperation is also required to adapt to the complexities of modern air combats. The scale of the traditional rule-based expert system will become incredible in this case. In view of this, a mixed-reward multiagent proximal policy optimization (MRMAPPO) method is proposed in this article that is used to help train cooperative BVR air combat tactics via adversarial self-play. First, a two-on-two BVR air combat simulation platform is established, and the combat game is modeled as a Markov game. Second, centralized training with decentralized execution architecture is established. Multiple actors are involved in the architecture, each corresponding to a policy that generates a specified kind of command, e.g., the maneuvering and firing command. Moreover, in order to accelerate training as well as enhance the stability of the training process, four optimization mechanisms are introduced. The experimental section discusses how the effectiveness of the MRMAPPO is verified with comparative and ablation experiments, along with several air combat tactics that emerge in the training process.","PeriodicalId":100619,"journal":{"name":"IEEE Canadian Journal of Electrical and Computer Engineering","volume":"47 4","pages":"206-217"},"PeriodicalIF":2.1000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mixed-Reward Multiagent Proximal Policy Optimization Method for Two-on-Two Beyond-Visual-Range Air Combat\",\"authors\":\"Haojie Peng;Weihua Li;Sifan Dai;Ruihai Chen\",\"doi\":\"10.1109/ICJECE.2024.3451965\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With recent advances in airborne weapons, modern air combats tend to be accomplished in the beyond-visual-range (BVR) phase. Multiaircraft cooperation is also required to adapt to the complexities of modern air combats. The scale of the traditional rule-based expert system will become incredible in this case. In view of this, a mixed-reward multiagent proximal policy optimization (MRMAPPO) method is proposed in this article that is used to help train cooperative BVR air combat tactics via adversarial self-play. First, a two-on-two BVR air combat simulation platform is established, and the combat game is modeled as a Markov game. Second, centralized training with decentralized execution architecture is established. Multiple actors are involved in the architecture, each corresponding to a policy that generates a specified kind of command, e.g., the maneuvering and firing command. Moreover, in order to accelerate training as well as enhance the stability of the training process, four optimization mechanisms are introduced. 
The experimental section discusses how the effectiveness of the MRMAPPO is verified with comparative and ablation experiments, along with several air combat tactics that emerge in the training process.\",\"PeriodicalId\":100619,\"journal\":{\"name\":\"IEEE Canadian Journal of Electrical and Computer Engineering\",\"volume\":\"47 4\",\"pages\":\"206-217\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Canadian Journal of Electrical and Computer Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10688404/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Canadian Journal of Electrical and Computer Engineering","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10688404/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

With recent advances in airborne weapons, modern air combat tends to be decided in the beyond-visual-range (BVR) phase, and multiaircraft cooperation is required to cope with its complexity. In such settings, a traditional rule-based expert system grows to an unmanageable scale. In view of this, a mixed-reward multiagent proximal policy optimization (MRMAPPO) method is proposed in this article to train cooperative BVR air combat tactics via adversarial self-play. First, a two-on-two BVR air combat simulation platform is established, and the engagement is modeled as a Markov game. Second, a centralized-training, decentralized-execution architecture is established. Multiple actors are involved in the architecture, each corresponding to a policy that generates a specific kind of command, e.g., the maneuvering and firing commands. Moreover, four optimization mechanisms are introduced to accelerate training and to stabilize the training process. The experimental section verifies the effectiveness of MRMAPPO through comparative and ablation experiments and discusses several air combat tactics that emerge during training.
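The abstract describes a centralized-training, decentralized-execution (CTDE) setup in which each agent carries separate actors for maneuvering and firing commands, trained with a PPO-style clipped objective on a mixed reward. As a rough illustration only, the sketch below shows one way such a structure could look in PyTorch. The network sizes, action-space dimensions, the `mixed_reward` weighting, and all class and variable names are assumptions made for this sketch; they are not taken from the paper, which also details four optimization mechanisms not reproduced here.

```python
# Minimal CTDE actor-critic sketch in the spirit of MAPPO, with separate
# maneuver/fire policy heads and a mixed (dense + sparse) reward.
# All names, dimensions, and the reward weighting are illustrative assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical

OBS_DIM = 32        # assumed per-agent observation size
STATE_DIM = 64      # assumed global-state size for the centralized critic
N_MANEUVERS = 7     # assumed size of the discrete maneuver library
N_FIRE = 2          # fire / hold fire

class Actor(nn.Module):
    """Decentralized actor: one policy head per command type (maneuver, fire)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.Tanh(),
                                   nn.Linear(128, 128), nn.Tanh())
        self.maneuver_head = nn.Linear(128, N_MANEUVERS)
        self.fire_head = nn.Linear(128, N_FIRE)

    def forward(self, obs):
        h = self.trunk(obs)
        return (Categorical(logits=self.maneuver_head(h)),
                Categorical(logits=self.fire_head(h)))

class CentralCritic(nn.Module):
    """Centralized value function conditioned on the global state (CTDE)."""
    def __init__(self):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.Tanh(),
                               nn.Linear(128, 1))

    def forward(self, state):
        return self.v(state).squeeze(-1)

def mixed_reward(dense_shaping, sparse_outcome, w_dense=0.3):
    """Assumed mixing rule: weighted sum of dense shaping terms
    (e.g., angle/range advantage) and sparse engagement outcomes."""
    return w_dense * dense_shaping + (1.0 - w_dense) * sparse_outcome

def ppo_loss(actor, critic, batch, clip_eps=0.2, v_coef=0.5, ent_coef=0.01):
    """Clipped-PPO surrogate for one agent's actor plus the shared critic."""
    man_dist, fire_dist = actor(batch["obs"])
    logp = man_dist.log_prob(batch["man_act"]) + fire_dist.log_prob(batch["fire_act"])
    ratio = torch.exp(logp - batch["old_logp"])
    adv = batch["adv"]
    policy_loss = -torch.min(ratio * adv,
                             torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv).mean()
    value_loss = (critic(batch["state"]) - batch["ret"]).pow(2).mean()
    entropy = (man_dist.entropy() + fire_dist.entropy()).mean()
    return policy_loss + v_coef * value_loss - ent_coef * entropy

if __name__ == "__main__":
    # Toy batch of rollout data to show the shapes the update expects.
    actor, critic = Actor(), CentralCritic()
    batch = {
        "obs": torch.randn(16, OBS_DIM),
        "state": torch.randn(16, STATE_DIM),
        "man_act": torch.randint(0, N_MANEUVERS, (16,)),
        "fire_act": torch.randint(0, N_FIRE, (16,)),
        "old_logp": torch.zeros(16),
        "adv": torch.randn(16),
        "ret": torch.randn(16),
    }
    loss = ppo_loss(actor, critic, batch)
    loss.backward()
    print("total loss:", float(loss))
```

In this sketch, decentralized execution corresponds to each aircraft acting from its own Actor on local observations, while the CentralCritic sees a global state only during training; advantages (`adv`) and returns (`ret`) would be computed from the mixed reward before the update. In adversarial self-play, both sides would be driven by copies of such policies, with the learning side updated against snapshots of its opponent.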