3D semantic map construction based on point cloud and image fusion
Huijun Li, Hailong Zhao, Bin Ye, Yu Zhang
IET Cybersystems and Robotics, published 16 February 2023. DOI: 10.1049/csy2.12078
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12078
Accurate and robust positioning and mapping are core functions of autonomous mobile robots, and the ability to analyse and understand scenes is an important criterion for their intelligence. In outdoor environments, most robots rely on GPS for positioning; when the signal is weak, positioning errors corrupt the mapping results, making semantic map construction less robust. This research designs a semantic map construction system for large outdoor scenes that does not rely on GPS signals. A feature extraction scheme is designed around the sampling characteristics of the Livox AVIA solid-state LiDAR. A factor graph jointly optimises frame poses and inertial measurement unit (IMU) pre-integrated poses, using a sliding window to fuse solid-state LiDAR and IMU data. The resulting laser-inertial odometry is fused with camera object detection results, and semantic information is transferred to the point cloud by referring to closest-point distance and curvature, realising semantic segmentation and the construction of a 3D semantic map of outdoor scenes. Experiments verify that the factor-graph-based laser-inertial odometry achieves better positioning accuracy and lower overall cumulative error at turns, and that the 3D semantic map obtained on this basis performs well.
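The abstract's camera-LiDAR fusion step (transferring 2D detection labels to 3D points, gated by closest-point distance) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the pinhole projection model, and the simple depth-gating heuristic are all assumptions made for illustration; the paper also uses curvature, which is omitted here.

```python
import numpy as np

def label_points_from_detections(points_lidar, T_cam_lidar, K, detections,
                                 depth_gate=0.5):
    """Assign a semantic label to each LiDAR point by projecting it into
    the camera image and testing membership in 2D detection boxes.

    points_lidar : (N, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    detections   : list of (label, (u_min, v_min, u_max, v_max)) boxes
    depth_gate   : metres beyond the closest in-box point still accepted
                   (a simple closest-point-distance check)
    Returns an (N,) integer array of labels, -1 for unlabelled points.
    """
    n = points_lidar.shape[0]
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    labels = np.full(n, -1, dtype=int)

    in_front = pts_cam[:, 2] > 0.1          # keep points in front of the camera
    uv = (K @ pts_cam.T).T                  # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide

    for label, (u0, v0, u1, v1) in detections:
        in_box = (in_front
                  & (uv[:, 0] >= u0) & (uv[:, 0] <= u1)
                  & (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
        if not in_box.any():
            continue
        # Closest-point gating: only points near the nearest depth inside
        # the box are treated as the detected object; points much farther
        # away are background seen through the same 2D box.
        z = pts_cam[in_box, 2]
        keep = np.where(in_box)[0][z <= z.min() + depth_gate]
        labels[keep] = label
    return labels
```

The depth gate matters because a 2D bounding box is a frustum in 3D: without it, ground and background points behind the object would inherit its label.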