{"title":"ETQ-learning: an improved Q-learning algorithm for path planning","authors":"Huanwei Wang, Jing Jing, Qianlv Wang, Hongqi He, Xuyan Qi, Rui Lou","doi":"10.1007/s11370-024-00544-3","DOIUrl":null,"url":null,"abstract":"<p>Path planning algorithm has always been the core of intelligent robot research; a good path planning algorithm can significantly enhance the efficiency of robots in executing tasks. As the application scenarios for intelligent robots continue to diversify, their adaptability to the environment has become a key focus in current path planning algorithm research. As one of the classic reinforcement learning algorithms, Q-learning (QL) algorithm has its inherent advantages in adapting to the environment, but it also faces various challenges and shortcomings. These issues are primarily centered around suboptimal path planning, slow convergence speed, weak generalization capability and poor obstacle avoidance performance. In order to solve these issues in the QL algorithm, we have carried out the following work. (1) We redesign the reward mechanism of QL algorithm. The traditional Q-learning algorithm’s reward mechanism is simple to implement but lacks directionality. We propose a combined reward mechanism of \"static assignment + dynamic adjustment.\" This mechanism can address the issue of random path selection and ultimately lead to optimal path planning. (2) We redesign the greedy strategy of QL algorithm. In the traditional Q-learning algorithm, the greedy factor in the strategy is either randomly generated or set manually, which limits its applicability to some extent. It is difficult to effectively applied to different physical environments and scenarios, which is the fundamental reason for the poor generalization capability of the algorithm. We propose a dynamic adjustment of the greedy factor, known as the <span>\\(\\varepsilon -acc-increasing\\)</span> greedy strategy, which significantly improves the efficiency of Q-learning algorithm and enhances its generalization capability so that the algorithm has a wider range of application scenarios. (3) We introduce a concept to enhance the algorithm’s obstacle avoidance performance. We design the expansion distance, which pre-sets a \"collision buffer\" between the obstacle and agent to enhance the algorithm’s obstacle avoidance performance.\n</p>","PeriodicalId":48813,"journal":{"name":"Intelligent Service Robotics","volume":"172 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Service Robotics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11370-024-00544-3","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0
Abstract
Path planning algorithms have always been at the core of intelligent robot research; a good path planning algorithm can significantly enhance the efficiency with which robots execute tasks. As the application scenarios for intelligent robots continue to diversify, adaptability to the environment has become a key focus of current path planning research. As one of the classic reinforcement learning algorithms, the Q-learning (QL) algorithm has inherent advantages in adapting to the environment, but it also faces various challenges and shortcomings, primarily suboptimal path planning, slow convergence, weak generalization capability, and poor obstacle avoidance. To address these issues, we carry out the following work. (1) We redesign the reward mechanism of the QL algorithm. The traditional Q-learning reward mechanism is simple to implement but lacks directionality. We propose a combined reward mechanism of "static assignment + dynamic adjustment," which addresses the issue of random path selection and ultimately leads to optimal path planning. (2) We redesign the greedy strategy of the QL algorithm. In the traditional Q-learning algorithm, the greedy factor is either randomly generated or set manually, which limits its applicability: it is difficult to apply effectively across different physical environments and scenarios, and this is the fundamental reason for the algorithm's poor generalization capability. We propose dynamically adjusting the greedy factor, the \(\varepsilon\)-acc-increasing greedy strategy, which significantly improves the efficiency of the Q-learning algorithm and enhances its generalization capability, giving the algorithm a wider range of application scenarios. (3) We introduce the expansion distance to enhance the algorithm's obstacle avoidance performance: it pre-sets a "collision buffer" between obstacles and the agent.
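The abstract does not give the concrete formulas behind these three mechanisms, so the following is a minimal Python sketch on a toy grid world, under stated assumptions: the reward combines fixed terminal values with a distance-based shaping term (an assumed form of "static assignment + dynamic adjustment"), the probability of acting greedily grows with training progress (one plausible reading of \(\varepsilon\)-acc-increasing), and cells within the expansion distance of an obstacle are penalized as a collision buffer. All constants and helper names are illustrative, not the authors' definitions.

```python
# Minimal sketch of the three mechanisms described above, on a toy grid world.
# The exact formulas are not given in the abstract; the reward shaping, the
# epsilon schedule, and the buffer penalty below are illustrative assumptions.
import random

ROWS, COLS = 10, 10
START, GOAL = (0, 0), (9, 9)
OBSTACLES = {(4, 4), (4, 5), (5, 4)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
ALPHA, GAMMA = 0.1, 0.9
EXPANSION_DISTANCE = 1   # radius of the assumed "collision buffer" (Chebyshev)

def in_buffer(cell):
    """True if cell lies within the expansion distance of any obstacle."""
    return any(max(abs(cell[0] - r), abs(cell[1] - c)) <= EXPANSION_DISTANCE
               for r, c in OBSTACLES)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def reward(state, nxt):
    """Assumed 'static assignment + dynamic adjustment' reward: fixed values
    for goal/collision/step plus a shaping term for progress toward the goal."""
    if nxt == GOAL:
        return 100.0                     # static: goal reached
    if nxt in OBSTACLES:
        return -100.0                    # static: collision
    r = -1.0                             # static: per-step cost
    if in_buffer(nxt):
        r -= 10.0                        # static: inside the collision buffer
    return r + 0.5 * (manhattan(state, GOAL) - manhattan(nxt, GOAL))  # dynamic

def greedy_prob(episode, total):
    """Assumed epsilon-acc-increasing schedule: the probability of acting
    greedily grows with training progress instead of being fixed by hand."""
    return min(0.95, 0.1 + 0.85 * (episode / total) ** 2)

def step(state, action):
    r = min(max(state[0] + action[0], 0), ROWS - 1)   # clamp to the grid
    c = min(max(state[1] + action[1], 0), COLS - 1)
    return (r, c)

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

EPISODES = 500
for ep in range(EPISODES):
    state, g = START, greedy_prob(ep, EPISODES)
    for _ in range(2000):                # step cap keeps the sketch bounded
        if random.random() < g:          # exploit with growing probability
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
        else:                            # otherwise explore at random
            a = random.randrange(len(ACTIONS))
        nxt = step(state, ACTIONS[a])
        target = reward(state, nxt) + GAMMA * max(Q[(nxt, i)]
                                                  for i in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (target - Q[(state, a)])
        if nxt == GOAL:
            break
        state = START if nxt in OBSTACLES else nxt   # collision restarts the walk
```

The tabular update itself is the standard \(Q(s,a) \leftarrow Q(s,a) + \alpha\,[r + \gamma \max_{a'} Q(s',a') - Q(s,a)]\); only the reward design, the exploration schedule, and the buffered obstacle model differ from vanilla Q-learning.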
About the journal
The journal directs special attention to the emerging significance of integrating robotics with information technology and cognitive science (such as ubiquitous and adaptive computing, information integration in a distributed environment, and cognitive modelling for human-robot interaction), which spurs innovation toward a new multi-dimensional robotic service to humans. The journal intends to capture and archive this emerging yet significant advancement in the field of intelligent service robotics. It publishes high-quality original papers presenting innovative ideas and concepts, new discoveries and improvements, and novel applications and business models related to the field of intelligent service robotics described above. The areas the journal covers include, but are not limited to:
- Service Robotics: intelligent robots serving humans in daily life or in hazardous environments, such as home or personal service robots, entertainment robots, education robots, medical robots, healthcare and rehabilitation robots, and rescue robots.
- Embedded Robotics: intelligent robotic functions in the form of embedded systems, applied, for example, to intelligent spaces, intelligent vehicles and transportation systems, intelligent manufacturing systems, and intelligent medical facilities.
- Networked Robotics: the integration of robotics with network technologies, generating such services and solutions as distributed robots, distance robotic education aides, and virtual laboratories or museums.