Reinforcement learning in spacecraft control applications: Advances, prospects, and challenges

IF 7.3 | CAS Zone 2 (Computer Science) | JCR Q1 (Automation & Control Systems) | Annual Reviews in Control | Pub Date: 2022-01-01 | DOI: 10.1016/j.arcontrol.2022.07.004
Massimo Tipaldi, Raffaele Iervolino, Paolo Roberto Massenio
{"title":"航天器控制应用中的强化学习:进展、前景和挑战","authors":"Massimo Tipaldi ,&nbsp;Raffaele Iervolino ,&nbsp;Paolo Roberto Massenio","doi":"10.1016/j.arcontrol.2022.07.004","DOIUrl":null,"url":null,"abstract":"<div><p>This paper presents and analyzes Reinforcement Learning (RL) based approaches to solve spacecraft control<span> problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and implementing controllers (i.e., RL agents) robust to system uncertainties and adaptive to changing environments. For each application field, the RL framework core elements (e.g., the reward function, the RL algorithm and the environment model used for the RL agent training) are discussed with the aim of providing some guidelines in the formulation of spacecraft control problems via a RL framework. At the same time, the adoption of RL in real space projects is also analyzed. Different open points are identified and discussed, e.g., the availability of high-fidelity simulators for the RL agent training and the verification of RL-based solutions. This way, recommendations for future work are proposed with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.</span></p></div>","PeriodicalId":50750,"journal":{"name":"Annual Reviews in Control","volume":"54 ","pages":"Pages 1-23"},"PeriodicalIF":7.3000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Reinforcement learning in spacecraft control applications: Advances, prospects, and challenges\",\"authors\":\"Massimo Tipaldi ,&nbsp;Raffaele Iervolino ,&nbsp;Paolo Roberto Massenio\",\"doi\":\"10.1016/j.arcontrol.2022.07.004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This paper presents and analyzes Reinforcement Learning (RL) based approaches to solve spacecraft control<span> problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and implementing controllers (i.e., RL agents) robust to system uncertainties and adaptive to changing environments. For each application field, the RL framework core elements (e.g., the reward function, the RL algorithm and the environment model used for the RL agent training) are discussed with the aim of providing some guidelines in the formulation of spacecraft control problems via a RL framework. At the same time, the adoption of RL in real space projects is also analyzed. Different open points are identified and discussed, e.g., the availability of high-fidelity simulators for the RL agent training and the verification of RL-based solutions. 
This way, recommendations for future work are proposed with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.</span></p></div>\",\"PeriodicalId\":50750,\"journal\":{\"name\":\"Annual Reviews in Control\",\"volume\":\"54 \",\"pages\":\"Pages 1-23\"},\"PeriodicalIF\":7.3000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annual Reviews in Control\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S136757882200089X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual Reviews in Control","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S136757882200089X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 14

Abstract

This paper presents and analyzes Reinforcement Learning (RL) based approaches to solve spacecraft control problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and implementing controllers (i.e., RL agents) robust to system uncertainties and adaptive to changing environments. For each application field, the RL framework core elements (e.g., the reward function, the RL algorithm and the environment model used for the RL agent training) are discussed with the aim of providing some guidelines in the formulation of spacecraft control problems via a RL framework. At the same time, the adoption of RL in real space projects is also analyzed. Different open points are identified and discussed, e.g., the availability of high-fidelity simulators for the RL agent training and the verification of RL-based solutions. This way, recommendations for future work are proposed with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.
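The survey itself does not prescribe any particular implementation. Purely as an illustrative sketch of the core RL framework elements it reviews (the environment model, the reward function, and the training of an RL agent), the hypothetical Gymnasium-style environment below casts a heavily simplified 1D powered-descent landing problem in the standard RL interface. The class name, dynamics constants, state bounds, and reward weights are all assumptions chosen for the example, not values taken from the paper.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class Simple1DLandingEnv(gym.Env):
    """Toy 1D powered-descent landing environment (illustrative only).

    State: [altitude (m), vertical velocity (m/s), remaining propellant (kg)]
    Action: throttle command in [0, 1] (fraction of maximum thrust)
    Reward: per-step penalty on propellant use plus a terminal bonus for a
    soft touchdown. All constants are assumptions for this sketch.
    """

    def __init__(self):
        super().__init__()
        self.g = 1.62            # surface gravity, m/s^2 (lunar-like, assumed)
        self.dt = 0.5            # integration step, s
        self.max_thrust = 15e3   # N
        self.dry_mass = 2.0e3    # kg
        self.v_exhaust = 3000.0  # effective exhaust velocity, m/s
        self.observation_space = spaces.Box(
            low=np.array([0.0, -200.0, 0.0], dtype=np.float32),
            high=np.array([2000.0, 200.0, 500.0], dtype=np.float32),
        )
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Start at 1 km altitude, descending at 30 m/s, with 400 kg of propellant.
        self.state = np.array([1000.0, -30.0, 400.0], dtype=np.float32)
        return self.state.copy(), {}

    def step(self, action):
        h, v, m_prop = self.state
        throttle = float(np.clip(action[0], 0.0, 1.0)) if m_prop > 0.0 else 0.0
        thrust = throttle * self.max_thrust
        mass = self.dry_mass + m_prop
        # Forward-Euler point-mass dynamics: this is the "environment model".
        accel = thrust / mass - self.g
        v_new = v + accel * self.dt
        h_new = max(h + v * self.dt, 0.0)
        m_prop_new = max(m_prop - thrust / self.v_exhaust * self.dt, 0.0)
        self.state = np.array([h_new, v_new, m_prop_new], dtype=np.float32)

        terminated = h_new <= 0.0
        # Reward function: small penalty for propellant use each step, plus a
        # terminal bonus/penalty depending on touchdown speed (soft if |v| < 2 m/s).
        reward = -0.01 * throttle
        if terminated:
            reward += 100.0 if abs(v_new) < 2.0 else -100.0
        return self.state.copy(), reward, terminated, False, {}
```

An off-the-shelf RL algorithm could then be trained against such an environment, for instance with Stable-Baselines3: `PPO("MlpPolicy", Simple1DLandingEnv()).learn(total_timesteps=200_000)`. Whether a toy model like this is an adequate stand-in for the high-fidelity simulators that agent training and verification actually require is precisely one of the open points the paper discusses.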

Source Journal
Annual Reviews in Control
Category: Engineering & Technology - Automation & Control Systems
CiteScore: 19.00
Self-citation rate: 2.10%
Articles per year: 53
Review time: 36 days
Journal Description
The field of Control is changing very fast now with technology-driven "societal grand challenges" and with the deployment of new digital technologies. The aim of Annual Reviews in Control is to provide comprehensive and visionary views of the field of Control, by publishing the following types of review articles:
Survey Article: Review papers on main methodologies or technical advances adding considerable technical value to the state of the art. Note that papers which purely rely on mechanistic searches and lack comprehensive analysis providing a clear contribution to the field will be rejected.
Vision Article: Cutting-edge and emerging topics with a visionary perspective on the future of the field or how it will bridge multiple disciplines.
Tutorial Research Article: Fundamental guides for future studies.
Latest Articles in This Journal
Editorial Board
Analysis and design of model predictive control frameworks for dynamic operation—An overview
Advances in controller design of pacemakers for pacing control: A comprehensive review
Recent advances in path integral control for trajectory optimization: An overview in theoretical and algorithmic perspectives
Analyzing stability in 2D systems via LMIs: From pioneering to recent contributions