Lan Zhu, Huan-Ting Lin, Xia Chen, Wei Liang, Zhen Cheng, Dongheng Shao, Hui Yu, Y. Zheng, Weicheng Ma
Indoor Robot Localization Based on Visual Perception and on Particle Filter Algorithm of Increasing Priority Particles
DOI: 10.1145/3558819.3565461
Published: 2022-09-23, Proceedings of the 7th International Conference on Cyber Security and Information Engineering
Citations: 0
Abstract
Indoor positioning is a prerequisite for a robot to complete tasks indoors. Humans localize and navigate themselves visually: the brain analyzes the objects seen by the eyes and judges their relative distances. This paper lets the robot imitate this human habit of indoor visual perception and positioning, using a depth camera to measure distance information and the YOLOv3 model to recognize objects. In the mapping stage, the global three-dimensional coordinates of the objects the depth camera can recognize are recorded. During actual positioning, the robot then localizes itself by trilateration against these landmarks, fusing the result with wheel-odometer and IMU data. Using a particle filter algorithm that adds high-priority particles, the robot can imitate human visual-perception positioning indoors. Compared with visual-positioning methods that must analyze and match large numbers of feature points, the proposed approach stores less data during early map construction, and the robot can relocalize more quickly after a kidnapped-robot event. The algorithm is closer to human reasoning and shows stronger robustness and spatial portability.
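The trilateration step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a planar robot, at least three recognized landmark objects with known map coordinates, and range measurements from the depth camera. Subtracting the first range equation from the others linearizes the problem, which a least-squares solve then handles (landmark names and values below are made up for the example).

```python
import numpy as np

def trilaterate_2d(landmarks, distances):
    """Estimate a planar robot position from >= 3 known landmark
    coordinates and measured ranges via least-squares trilateration."""
    L = np.asarray(landmarks, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first range equation from the rest to linearize:
    # 2(L_i - L_0) . p = d_0^2 - d_i^2 + |L_i|^2 - |L_0|^2
    A = 2.0 * (L[1:] - L[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical map: three recognized objects, robot truly at (1, 1).
marks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = np.array([1.0, 1.0])
dists = [np.linalg.norm(true_pos - np.array(m)) for m in marks]
print(trilaterate_2d(marks, dists))  # ≈ [1. 1.]
```

With more than three landmarks the same least-squares solve averages out individual range noise, which is why redundant recognized objects help; in the paper's setting this fix would then be fused with odometer and IMU data inside the particle filter.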