A digital twins enabled underwater intelligent internet vehicle path planning system via reinforcement learning and edge computing

Authors: Jiachen Yang, Meng Xi, Jiabao Wen, Yang Li, Houbing Herbert Song
Journal: Digital Communications and Networks (JCR Q1, Telecommunications; Impact Factor 7.5)
Article type: Journal Article
DOI: 10.1016/j.dcan.2022.05.005
Publication date: 2024-04-01
Article page: https://www.sciencedirect.com/science/article/pii/S2352864822000967
Open Access PDF: https://www.sciencedirect.com/science/article/pii/S2352864822000967/pdfft?md5=6d47224900e557afd8fc3eeadffa37f3&pid=1-s2.0-S2352864822000967-main.pdf
The Autonomous Underwater Glider (AUG) is a widely used class of underwater intelligent internet vehicle and plays a dominant role in industrial applications, where path planning is an essential problem. Because the ocean is complex and variable, accurate environment modeling and flexible path planning algorithms are pivotal challenges. Traditional models rely mainly on mathematical functions and are neither complete nor reliable, while most existing path planning algorithms are tied to a specific environment and lack flexibility. To overcome these challenges, we propose a path planning system for underwater intelligent internet vehicles. It uses digital twins and sensor data to map the real ocean environment into a virtual digital space, which provides a comprehensive and reliable environment for path simulation. We design a value-based reinforcement learning path planning algorithm and explore the optimal network structure parameters. The path simulation is controlled by a closed-loop model integrated into the terminal vehicle through edge computing. Incorporating state input enriches the learning of the neural network and helps improve generalization and flexibility, and the task-related reward function promotes rapid convergence of training. Experimental results show that our reinforcement learning based path planning algorithm is highly flexible and can effectively adapt to a variety of ocean conditions.
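The paper's implementation is not reproduced here. As a minimal sketch of the value-based reinforcement-learning planning idea described above, the Python example below trains tabular Q-learning on a toy 2-D grid that stands in for a discretized digital-twin ocean map. The environment class, the reward shaping (collision penalty, per-step cost, terminal goal reward), and all hyperparameters are illustrative assumptions rather than the authors' design, which uses a neural value network with richer state input and a task-related reward.

```python
# Hypothetical sketch: tabular Q-learning on a toy 2-D grid standing in for a
# discretized digital-twin ocean map. Environment, reward shaping, and
# hyperparameters are illustrative assumptions, not the authors' implementation.
import random

class OceanGrid:
    """Toy discretized ocean map with static obstacles and a goal cell."""
    def __init__(self, size=10, obstacles=None, goal=(9, 9)):
        self.size = size
        self.obstacles = set(obstacles or [(3, 3), (3, 4), (6, 7)])
        self.goal = goal
        self.state = (0, 0)

    def reset(self):
        self.state = (0, 0)
        return self.state

    def step(self, action):
        # Actions: 0 = north, 1 = south, 2 = west, 3 = east.
        dx, dy = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        x = min(max(self.state[0] + dx, 0), self.size - 1)
        y = min(max(self.state[1] + dy, 0), self.size - 1)
        nxt = (x, y)
        if nxt in self.obstacles:
            return self.state, -5.0, False   # collision penalty, stay in place
        self.state = nxt
        if nxt == self.goal:
            return nxt, 50.0, True           # task-related terminal reward
        return nxt, -1.0, False              # step cost favours short paths

def train(env, episodes=2000, alpha=0.1, gamma=0.95, eps=0.2):
    """Learn state-action values with epsilon-greedy tabular Q-learning."""
    q = {}  # (state, action) -> estimated return
    for _ in range(episodes):
        s = env.reset()
        for _ in range(200):  # cap episode length
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda a_: q.get((s, a_), 0.0))
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(q.get((s2, a_), 0.0) for a_ in range(4))
            old = q.get((s, a), 0.0)
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if done:
                break
    return q

def plan_path(env, q, max_steps=50):
    """Greedy rollout of the learned values, i.e. the planned path."""
    s = env.reset()
    path = [s]
    for _ in range(max_steps):
        a = max(range(4), key=lambda a_: q.get((s, a_), 0.0))
        s, _, done = env.step(a)
        path.append(s)
        if done:
            break
    return path

if __name__ == "__main__":
    env = OceanGrid()
    q_table = train(env)
    print(plan_path(env, q_table))
```

In the setting the abstract describes, the Q-table would be replaced by a deep value network so the planner can generalize across sensor-derived states, and the trained policy would run on the vehicle itself via edge computing.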
About the journal:
Digital Communications and Networks is a prestigious journal focused on communication systems and networks. We publish only top-notch original articles and authoritative reviews, all of which undergo rigorous peer review. We are proud that all our articles are fully Open Access and available on ScienceDirect. The journal is recognized and indexed by eminent databases such as the Science Citation Index Expanded (SCIE) and Scopus.
In addition to regular articles, we may also consider exceptional conference papers that have been significantly expanded. Furthermore, we periodically release special issues that focus on specific aspects of the field.
In conclusion, Digital Communications and Networks is a leading journal committed to exceptional quality and accessibility for researchers and scholars in the field of communication systems and networks.