Massimo Tipaldi, Raffaele Iervolino, Paolo Roberto Massenio
{"title":"航天器控制应用中的强化学习:进展、前景和挑战","authors":"Massimo Tipaldi , Raffaele Iervolino , Paolo Roberto Massenio","doi":"10.1016/j.arcontrol.2022.07.004","DOIUrl":null,"url":null,"abstract":"<div><p>This paper presents and analyzes Reinforcement Learning (RL) based approaches to solve spacecraft control<span> problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and implementing controllers (i.e., RL agents) robust to system uncertainties and adaptive to changing environments. For each application field, the RL framework core elements (e.g., the reward function, the RL algorithm and the environment model used for the RL agent training) are discussed with the aim of providing some guidelines in the formulation of spacecraft control problems via a RL framework. At the same time, the adoption of RL in real space projects is also analyzed. Different open points are identified and discussed, e.g., the availability of high-fidelity simulators for the RL agent training and the verification of RL-based solutions. 
This way, recommendations for future work are proposed with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.</span></p></div>","PeriodicalId":50750,"journal":{"name":"Annual Reviews in Control","volume":"54 ","pages":"Pages 1-23"},"PeriodicalIF":7.3000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Reinforcement learning in spacecraft control applications: Advances, prospects, and challenges\",\"authors\":\"Massimo Tipaldi , Raffaele Iervolino , Paolo Roberto Massenio\",\"doi\":\"10.1016/j.arcontrol.2022.07.004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This paper presents and analyzes Reinforcement Learning (RL) based approaches to solve spacecraft control<span> problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and implementing controllers (i.e., RL agents) robust to system uncertainties and adaptive to changing environments. For each application field, the RL framework core elements (e.g., the reward function, the RL algorithm and the environment model used for the RL agent training) are discussed with the aim of providing some guidelines in the formulation of spacecraft control problems via a RL framework. At the same time, the adoption of RL in real space projects is also analyzed. Different open points are identified and discussed, e.g., the availability of high-fidelity simulators for the RL agent training and the verification of RL-based solutions. 
This way, recommendations for future work are proposed with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.</span></p></div>\",\"PeriodicalId\":50750,\"journal\":{\"name\":\"Annual Reviews in Control\",\"volume\":\"54 \",\"pages\":\"Pages 1-23\"},\"PeriodicalIF\":7.3000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annual Reviews in Control\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S136757882200089X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual Reviews in Control","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S136757882200089X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Reinforcement learning in spacecraft control applications: Advances, prospects, and challenges
This paper presents and analyzes Reinforcement Learning (RL)-based approaches to solving spacecraft control problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and of implementing controllers (i.e., RL agents) that are robust to system uncertainties and adaptive to changing environments. For each application field, the core elements of the RL framework (e.g., the reward function, the RL algorithm, and the environment model used for RL agent training) are discussed, with the aim of providing guidelines for formulating spacecraft control problems within an RL framework. The adoption of RL in real space projects is also analyzed. Several open points are identified and discussed, e.g., the availability of high-fidelity simulators for RL agent training and the verification of RL-based solutions. On this basis, recommendations for future work are proposed, with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.
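The three core framework elements named in the abstract — the environment model, the reward function, and the RL algorithm — can be made concrete with a deliberately minimal sketch: a toy 1-D powered-descent (soft-landing) environment trained with tabular Q-learning. All dynamics constants, reward values, and discretization choices below are illustrative assumptions for exposition only; they are not taken from the paper or from any real GNC design.

```python
import random

class Descent1D:
    """Toy 1-D powered-descent environment (the 'environment model').
    Constants and dynamics are illustrative assumptions only."""
    DT, G = 0.1, 1.62                 # time step [s], lunar gravity [m/s^2]

    def reset(self):
        self.h, self.v = 10.0, 0.0    # altitude [m], downward velocity [m/s]
        return self._obs()

    def _obs(self):
        # Coarse discretization so a tabular agent suffices.
        return (min(int(self.h), 10), min(int(abs(self.v)), 5))

    def step(self, action):           # action: thrust level 0, 1, or 2
        self.v += (self.G - 1.5 * action) * self.DT   # net acceleration
        self.h -= self.v * self.DT
        if self.h <= 0.0:             # touchdown: the 'reward function'
            return self._obs(), (10.0 if abs(self.v) < 1.0 else -10.0), True
        return self._obs(), -0.05 * action, False     # small fuel penalty

def train(episodes=500, alpha=0.2, gamma=0.99, eps=0.1, max_steps=400):
    """Tabular Q-learning: one simple instance of the 'RL algorithm'."""
    env, q = Descent1D(), {}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(max_steps):    # cap episode length (agent may hover)
            a = (random.randrange(3) if random.random() < eps
                 else max(range(3), key=lambda b: q.get((s, b), 0.0)))
            s2, r, done = env.step(a)
            best = 0.0 if done else max(q.get((s2, b), 0.0) for b in range(3))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best - old)
            s = s2
            if done:
                break
    return q
```

The same separation of concerns (environment model vs. reward vs. learning algorithm) carries over to the realistic settings surveyed in the paper, where the tabular learner would be replaced by a deep RL method and the toy dynamics by a high-fidelity simulator.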
About the journal:
The field of Control is changing very fast now with technology-driven “societal grand challenges” and with the deployment of new digital technologies. The aim of Annual Reviews in Control is to provide comprehensive and visionary views of the field of Control, by publishing the following types of review articles:
Survey Article: Review papers on main methodologies or technical advances adding considerable technical value to the state of the art. Note that papers which purely rely on mechanistic searches and lack comprehensive analysis providing a clear contribution to the field will be rejected.
Vision Article: Cutting-edge and emerging topics with visionary perspective on the future of the field or how it will bridge multiple disciplines, and
Tutorial Research Article: Fundamental guides for future studies.