A Multi-Constraint Guidance and Maneuvering Penetration Strategy via Meta Deep Reinforcement Learning
Sibo Zhao, Jianwen Zhu, Weimin Bao, Xiaoping Li, Haifeng Sun
Drones (MDPI), published 8 October 2023. DOI: https://doi.org/10.3390/drones7100626
To address the problem of UAV escape guidance, this study proposes a unified intelligent control strategy that combines optimal guidance with meta deep reinforcement learning (DRL). Optimal control with low energy consumption is introduced to satisfy terminal latitude, longitude, and altitude constraints. Maneuvering escape is realized by adding longitudinal and lateral maneuver overloads. The maneuver command decision model is computed with soft actor-critic (SAC) networks. Meta-learning is introduced to enhance autonomous escape capability, improving performance in time-varying scenarios not encountered during training. To generate training samples more quickly, reward values are obtained with a prediction method, avoiding a large number of numerical integrations. Simulation results demonstrate that the proposed intelligent strategy achieves highly precise guidance and effective escape.
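The abstract does not give implementation details, so the sketch below is only a rough illustration, not the authors' code: a SAC-style Gaussian actor that maps a guidance state to bounded longitudinal and lateral maneuver-overload commands. The state contents, network sizes, action ordering, and the overload bound are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): a SAC-style
# Gaussian actor producing bounded longitudinal/lateral maneuver overloads.
import torch
import torch.nn as nn


class ManeuverActor(nn.Module):
    def __init__(self, state_dim=8, action_dim=2, max_overload=3.0, hidden=256):
        super().__init__()
        self.max_overload = max_overload  # assumed overload bound (in g)
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def forward(self, state):
        h = self.net(state)
        mean = self.mean(h)
        log_std = self.log_std(h).clamp(-20, 2)   # standard SAC log-std clipping
        dist = torch.distributions.Normal(mean, log_std.exp())
        raw = dist.rsample()                      # reparameterized sample
        action = torch.tanh(raw)                  # squash to (-1, 1)
        # log-probability with the usual tanh-squashing correction
        log_prob = (dist.log_prob(raw) - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
        return self.max_overload * action, log_prob


# Usage: the state vector might hold relative position/velocity and time-to-go.
actor = ManeuverActor()
state = torch.zeros(1, 8)
overloads, logp = actor(state)  # assumed ordering: [:, 0] longitudinal, [:, 1] lateral
```

In full SAC training these overload commands would be superimposed on the optimal guidance command, with the actor updated against twin Q-critics and an entropy term; the meta-learning and prediction-based reward components described in the abstract are not shown here.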