End-to-end autonomous underwater vehicle path following control method based on improved soft actor–critic for deep space exploration
Na Dong, Shoufu Liu, Andrew W.H. Ip, Kai Leung Yung, Zhongke Gao, Rongshun Juan, Yanhui Wang
Journal of Industrial Information Integration, Volume 45, Article 100792
DOI: 10.1016/j.jii.2025.100792
Published: 2025-02-15
URL: https://www.sciencedirect.com/science/article/pii/S2452414X25000160
Citations: 0
Abstract
The vast extraterrestrial oceans are becoming a hotspot for future deep space exploration in the search for life. Because autonomous underwater vehicles (AUVs) have a large operating range and high flexibility, they play an important role in extraterrestrial ocean research. To address the problems of high training cost and poor exploration ability in AUV path following tasks, this paper designs an end-to-end AUV path following control method based on an improved soft actor–critic (SAC) algorithm, leveraging advances in deep reinforcement learning (DRL) to enhance performance and efficiency. The method uses sensor information to perceive the environment and the vehicle's state, and outputs a policy that produces the adaptive control action. Through continuous interaction with the environment, policies that account for long-term effects can be learned, which improves the adaptability and robustness of AUV control. An off-policy sampling method is designed to improve the utilization efficiency of experience transitions in the replay buffer, accelerate convergence, and enhance training stability. A reward function based on the AUV's current position and heading angle is designed to avoid sparse rewards that lead to slow or ineffective learning. In addition, a continuous action space is used instead of a discrete one to make real-time control of the AUV more accurate. Finally, the method is tested on the Gazebo simulation platform; the results confirm that reinforcement learning is effective for AUV control and that the proposed method achieves faster and better following performance than traditional reinforcement learning methods.
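To illustrate the idea of a dense reward built from the AUV's position and heading angle, the sketch below shows one plausible shaping of such a reward. It is a minimal sketch only: the abstract does not give the paper's actual reward formula, so the functional form, the weights w_pos and w_heading, and the function name path_following_reward are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of a dense path-following reward, assuming it combines
# cross-track error and heading-angle error; the exact form and weights used
# by the authors are not stated in the abstract.
def path_following_reward(cross_track_error, heading_error,
                          w_pos=1.0, w_heading=0.5):
    """Return a dense reward from the AUV's current position and heading.

    cross_track_error : perpendicular distance (m) from the AUV to the path
    heading_error     : angle (rad) between the AUV heading and the path tangent
    """
    # Reward staying close to the path and staying aligned with its direction,
    # so the agent gets informative feedback at every step (dense reward)
    # instead of only at sparse waypoints.
    position_term = np.exp(-w_pos * abs(cross_track_error))
    heading_term = np.exp(-w_heading * abs(heading_error))
    return position_term + heading_term
```

Such a shaped reward pairs naturally with a continuous action space, since the agent receives a smooth learning signal for small corrections to thrust and rudder commands at every control step.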
Journal Introduction:
The Journal of Industrial Information Integration focuses on the industry's transition towards industrial integration and informatization, covering not only hardware and software but also information integration. It serves as a platform for promoting advances in industrial information integration, addressing challenges, issues, and solutions in an interdisciplinary forum for researchers, practitioners, and policy makers.
The Journal of Industrial Information Integration welcomes papers on foundational, technical, and practical aspects of industrial information integration, emphasizing the complex and cross-disciplinary topics that arise in industrial integration. Techniques from mathematical science, computer science, computer engineering, electrical and electronic engineering, manufacturing engineering, and engineering management are crucial in this context.