Shripad V. Deshpande, Harikrishnan R, Rahee Walambe
{"title":"基于 POMDP 的轮式移动机器人路径规划概率决策","authors":"Shripad V. Deshpande, Harikrishnan R, Rahee Walambe","doi":"10.1016/j.cogr.2024.06.001","DOIUrl":null,"url":null,"abstract":"<div><p>Path Planning in a collaborative mobile robot system has been a research topic for many years. Uncertainty in robot states, actions, and environmental conditions makes finding the optimum path for navigation highly challenging for the robot. To achieve robust behavior for mobile robots in the presence of static and dynamic obstacles, it is pertinent that the robot employs a path-finding mechanism that is based on the probabilistic perception of the uncertainty in various parameters governing its movement. Partially Observable Markov Decision Process (POMDP) is being used by many researchers as a proven methodology for handling uncertainty. The POMDP framework requires manually setting up the state transition matrix, the observation matrix, and the reward values. This paper describes an approach for creating the POMDP model and demonstrates its working by simulating it on two mobile robots destined on a collision course. Selective test cases are run on the two robots with three categories – MDP (POMDP with belief state spread of 1), POMDP with distribution spread of belief state over ten observations, and distribution spread across two observations. Uncertainty in the sensor data is simulated with varying levels of up to 10 %. The results are compared and analyzed. 
It is demonstrated that when the observation probability spread is increased from 2 to 10, collision reduces from 34 % to 22 %, indicating that the system's robustness increases by 12 % with only a marginal increase of 3.4 % in the computational complexity.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"4 ","pages":"Pages 104-115"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241324000077/pdfft?md5=ccfa806c0ae32c5aba224cbf968b6b8d&pid=1-s2.0-S2667241324000077-main.pdf","citationCount":"0","resultStr":"{\"title\":\"POMDP-based probabilistic decision making for path planning in wheeled mobile robot\",\"authors\":\"Shripad V. Deshpande, Harikrishnan R, Rahee Walambe\",\"doi\":\"10.1016/j.cogr.2024.06.001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Path Planning in a collaborative mobile robot system has been a research topic for many years. Uncertainty in robot states, actions, and environmental conditions makes finding the optimum path for navigation highly challenging for the robot. To achieve robust behavior for mobile robots in the presence of static and dynamic obstacles, it is pertinent that the robot employs a path-finding mechanism that is based on the probabilistic perception of the uncertainty in various parameters governing its movement. Partially Observable Markov Decision Process (POMDP) is being used by many researchers as a proven methodology for handling uncertainty. The POMDP framework requires manually setting up the state transition matrix, the observation matrix, and the reward values. This paper describes an approach for creating the POMDP model and demonstrates its working by simulating it on two mobile robots destined on a collision course. 
Selective test cases are run on the two robots with three categories – MDP (POMDP with belief state spread of 1), POMDP with distribution spread of belief state over ten observations, and distribution spread across two observations. Uncertainty in the sensor data is simulated with varying levels of up to 10 %. The results are compared and analyzed. It is demonstrated that when the observation probability spread is increased from 2 to 10, collision reduces from 34 % to 22 %, indicating that the system's robustness increases by 12 % with only a marginal increase of 3.4 % in the computational complexity.</p></div>\",\"PeriodicalId\":100288,\"journal\":{\"name\":\"Cognitive Robotics\",\"volume\":\"4 \",\"pages\":\"Pages 104-115\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2667241324000077/pdfft?md5=ccfa806c0ae32c5aba224cbf968b6b8d&pid=1-s2.0-S2667241324000077-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667241324000077\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Robotics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667241324000077","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
POMDP-based probabilistic decision making for path planning in wheeled mobile robot
Path planning in collaborative mobile robot systems has been a research topic for many years. Uncertainty in robot states, actions, and environmental conditions makes finding the optimum navigation path highly challenging for the robot. To achieve robust behavior for mobile robots in the presence of static and dynamic obstacles, it is pertinent that the robot employ a path-finding mechanism based on probabilistic perception of the uncertainty in the various parameters governing its movement. The Partially Observable Markov Decision Process (POMDP) is used by many researchers as a proven methodology for handling such uncertainty. The POMDP framework requires manually setting up the state transition matrix, the observation matrix, and the reward values. This paper describes an approach for creating the POMDP model and demonstrates its working by simulating it on two mobile robots set on a collision course. Selected test cases are run on the two robots in three categories: MDP (POMDP with a belief-state spread of 1), POMDP with the belief state spread over ten observations, and POMDP with the belief state spread over two observations. Uncertainty in the sensor data is simulated at levels of up to 10%. The results are compared and analyzed. It is demonstrated that when the observation probability spread is increased from 2 to 10, the collision rate falls from 34% to 22%, indicating that the system's robustness increases by 12% with only a marginal 3.4% increase in computational complexity.
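To make the POMDP machinery the abstract refers to concrete, the following is a minimal, self-contained sketch of a belief update over a state-transition matrix, an observation matrix, and a belief distribution — the three ingredients the paper says must be set up manually. All matrices and values here are hypothetical placeholders for illustration, not the paper's actual model of the two robots.

```python
import numpy as np

# Hypothetical 3-state model (e.g. coarse robot headings); the transition
# matrix T and observation matrix O below are illustrative, not from the paper.
T = np.array([        # T[s, s'] = P(s' | s) under one fixed action
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])
O = np.array([        # O[s', o] = P(o | s'); off-diagonal mass models
    [0.9, 0.05, 0.05],# sensor noise, analogous to the paper's "spread"
    [0.05, 0.9, 0.05],# of observation probability across observations
    [0.05, 0.05, 0.9],
])

def belief_update(b, T, O, o):
    """Bayes filter: b'(s') ∝ P(o | s') * sum_s P(s' | s) * b(s)."""
    b_pred = T.T @ b              # predict step through the transition model
    b_new = O[:, o] * b_pred      # correct step with the observation likelihood
    return b_new / b_new.sum()    # renormalize to a probability distribution

b0 = np.array([1/3, 1/3, 1/3])    # uniform initial belief over the 3 states
b1 = belief_update(b0, T, O, o=0) # observe o=0; belief concentrates on state 0
print(np.round(b1, 3))            # → [0.9  0.05 0.05]
```

An MDP, as the abstract notes, is the degenerate case where the belief collapses onto a single state (spread of 1); widening the observation spread (here, the off-diagonal mass of `O`) keeps more probability on alternative states, which is the mechanism behind the robustness-versus-computation trade-off the paper measures.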