A Novel Path Planning Approach for Mobile Robot in Radioactive Environment Based on Improved Deep Q Network Algorithm

Zhiqiang Wu, Yebo Yin, Jie Liu, De Zhang, Jie Chen, Wei Jiang

Symmetry-Basel, published 2023-11-11. DOI: 10.3390/sym15112048
Abstract
The path planning problem for robots in nuclear environments is to find a collision-free path under constraints on path length and accumulated radiation dose. To solve this problem, the Improved Dueling Deep Double Q Network algorithm (ID3QN), based on an asymmetric neural network structure, is proposed. To address the overestimation and low sample utilization of the traditional Deep Q Network (DQN) algorithm, we optimized the neural network structure and used a double network to estimate action values. We also improved the action selection mechanism, adopted a prioritized experience replay mechanism, and redesigned the reward function. To evaluate the efficiency of the proposed algorithm, we designed simple and complex radioactive grid environments for comparison and compared the ID3QN algorithm with traditional algorithms and several deep reinforcement learning algorithms. The simulation results indicate that in the simple radioactive grid environment, the ID3QN algorithm outperforms traditional algorithms such as A*, GA, and ACO in terms of path length and accumulated radiation dose. Compared with other deep reinforcement learning algorithms, including DQN and several improved DQN variants, the ID3QN algorithm reduced the path length by 15.6%, decreased the accumulated radiation dose by 23.5%, and converged approximately 2300 episodes earlier. In the complex radioactive grid environment, the ID3QN algorithm likewise outperformed A*, GA, ACO, and the other deep reinforcement learning algorithms in terms of path length and accumulated radiation dose. Furthermore, the ID3QN algorithm can plan an obstacle-free optimal path with a low radiation dose even in complex environments. These results demonstrate that the ID3QN algorithm is an effective approach to robot path planning in nuclear environments, enhancing the safety and reliability of robots operating in such environments.
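The abstract names three DQN modifications (a dueling network, double-Q action-value estimation, and a reward balancing path length against radiation dose). The sketch below is a minimal illustration of those building blocks, not the authors' implementation: the layer sizes, state encoding, discount factor, reward constants, and the `dose_weight` parameter are all assumptions introduced here for illustration, and the prioritized experience replay buffer is omitted.

```python
# Minimal sketch of a dueling Q-network with a double-DQN target and a
# hypothetical path-length/dose reward. All dimensions and constants are
# illustrative assumptions, not values taken from the paper.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Asymmetric value and advantage streams of the dueling architecture.
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)          # V(s), shape (batch, 1)
        a = self.advantage(h)      # A(s, a), shape (batch, n_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_q_target(online: DuelingQNet, target: DuelingQNet,
                    reward: torch.Tensor, next_state: torch.Tensor,
                    done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online net selects the next action,
    the target net evaluates it, which reduces overestimation."""
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, best_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q


def step_reward(step_length: float, dose_rate: float,
                reached_goal: bool, collided: bool,
                dose_weight: float = 0.5) -> float:
    """Hypothetical reward shaping that penalizes both distance travelled
    and radiation dose received in the current cell."""
    if collided:
        return -10.0
    if reached_goal:
        return 10.0
    return -(step_length + dose_weight * dose_rate)
```

In this kind of setup, the online network is trained by regressing its Q-value for the taken action toward `double_q_target(...)`, with the target network's weights periodically copied from the online network.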
About the Journal
Symmetry (ISSN 2073-8994), an international and interdisciplinary scientific journal, publishes reviews, regular research papers, and short notes. Its aim is to encourage scientists to publish their experimental and theoretical research in as much detail as possible; there is no restriction on the length of papers. Full experimental and/or methodological details must be provided so that results can be reproduced.