Impulsive maneuver strategy for multi-agent orbital pursuit-evasion game under sparse rewards

Aerospace Science and Technology · IF 5.0 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (ENGINEERING, AEROSPACE) · Pub Date: 2024-09-29 · DOI: 10.1016/j.ast.2024.109618
Hongbo Wang, Yao Zhang
Abstract

To address the subjectivity of dense reward designs for the orbital pursuit-evasion game with multiple optimization objectives, this paper proposes a reinforcement learning method with a hierarchical network structure to guide game strategies under sparse rewards. First, to overcome the convergence challenges of reinforcement learning training under sparse rewards, a hierarchical network structure is proposed based on hindsight experience replay. Next, considering the strict constraints imposed by orbital dynamics on the spacecraft state space, the reachable domain method is introduced to refine the subgoal space in the hierarchical network, further facilitating the achievement of subgoals. Finally, by adopting a centralized training-layered execution approach, a complete multi-agent reinforcement learning method with the hierarchical network structure is established, enabling the networks at each level to learn effectively in parallel within sparse reward environments. Numerical simulations indicate that, under the single-agent reinforcement learning framework, the proposed method exhibits superior stability in the late training stage and improves early-stage exploration efficiency by 38.89% to 55.56% relative to the baseline method. Under the multi-agent reinforcement learning framework, as the relative distance decreases, the subgoals generated by the hierarchical network transition from long-term to short-term, aligning with human behavioral logic.
Cited by: 0

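The hindsight experience replay mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical "future" relabeling buffer under a sparse 0/-1 reward; the paper's actual buffer layout, reward tolerance, and goal encoding are not given in the abstract and are assumptions here:

```python
import random
from collections import deque

import numpy as np


class HindsightReplayBuffer:
    """Minimal hindsight experience replay (HER) with 'future' goal relabeling.

    Hypothetical sketch: failed rollouts are relabeled with goals that were
    actually achieved later in the episode, so a sparse-reward learner still
    receives successful training signal.
    """

    def __init__(self, capacity=100_000, k_future=4):
        self.buffer = deque(maxlen=capacity)
        self.k_future = k_future  # number of relabeled copies per transition

    def sparse_reward(self, achieved, goal, tol=1e-2):
        # Sparse reward: 0 on reaching the goal, -1 otherwise.
        return 0.0 if np.linalg.norm(achieved - goal) < tol else -1.0

    def store_episode(self, episode):
        """episode: list of (state, action, achieved_goal, goal) tuples."""
        T = len(episode)
        for t, (s, a, ag, g) in enumerate(episode):
            # Original transition with the intended goal.
            self.buffer.append((s, a, self.sparse_reward(ag, g), g))
            # Relabel with goals achieved at or after step t in this episode.
            for _ in range(self.k_future):
                future = random.randint(t, T - 1)
                new_g = episode[future][2]
                self.buffer.append((s, a, self.sparse_reward(ag, new_g), new_g))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

Each stored transition thus yields `1 + k_future` training samples, most of which carry a non-trivial (relabeled) success signal even when the original goal was never reached.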
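The reachable-domain refinement of the subgoal space could look like the following sketch. It assumes linearized Clohessy-Wiltshire relative dynamics and a single impulsive maneuver; the paper's actual reachable-domain formulation is not given in the abstract, so the dynamics model, the conservative singular-value test, and all parameter names are assumptions:

```python
import numpy as np


def cw_matrices(n, t):
    """Clohessy-Wiltshire position-transition blocks at time t.

    n: mean motion of the reference orbit [rad/s]. Returns (Phi_rr, Phi_rv)
    such that r(t) = Phi_rr @ r0 + Phi_rv @ v0 for the unmaneuvered drift.
    """
    s, c = np.sin(n * t), np.cos(n * t)
    phi_rr = np.array([
        [4 - 3 * c,        0, 0],
        [6 * (s - n * t),  1, 0],
        [0,                0, c],
    ])
    phi_rv = np.array([
        [s / n,           2 * (1 - c) / n,       0],
        [2 * (c - 1) / n, (4 * s - 3 * n * t) / n, 0],
        [0,               0,                     s / n],
    ])
    return phi_rr, phi_rv


def reachable_subgoals(candidates, r0, v0, n, t, dv_max):
    """Keep only candidate subgoal positions guaranteed reachable at time t
    with one impulse of magnitude <= dv_max.

    Conservative test: the impulse maps the ball |dv| <= dv_max through
    Phi_rv into an ellipsoid around the drift position; any point within
    sigma_min(Phi_rv) * dv_max of the drift lies inside that ellipsoid.
    """
    phi_rr, phi_rv = cw_matrices(n, t)
    drift = phi_rr @ r0 + phi_rv @ v0                       # no-maneuver position
    sigma_min = np.linalg.svd(phi_rv, compute_uv=False)[-1]  # smallest singular value
    radius = sigma_min * dv_max
    return [g for g in candidates if np.linalg.norm(g - drift) <= radius]
```

Pruning the subgoal space this way ensures a higher-level network never proposes a subgoal the low-level impulsive controller cannot physically attain within its delta-v budget.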
Source journal

Aerospace Science and Technology (Engineering: Aerospace)
CiteScore: 10.30 · Self-citation rate: 28.60% · Annual publications: 654 · Review time: 54 days
Journal description: Aerospace Science and Technology publishes articles of outstanding scientific quality. Each article is reviewed by two referees. The journal welcomes papers from a wide range of countries. This journal publishes original papers, review articles and short communications related to all fields of aerospace research, fundamental and applied, potential applications of which are clearly related to:
• The design and the manufacture of aircraft, helicopters, missiles, launchers and satellites
• The control of their environment
• The study of various systems they are involved in, as supports or as targets.
Authors are invited to submit papers on new advances in the following topics to aerospace applications:
• Fluid dynamics
• Energetics and propulsion
• Materials and structures
• Flight mechanics
• Navigation, guidance and control
• Acoustics
• Optics
• Electromagnetism and radar
• Signal and image processing
• Information processing
• Data fusion
• Decision aid
• Human behaviour
• Robotics and intelligent systems
• Complex system engineering. Etc.