{"title":"基于视觉的激光传感器无人机实时三维制图","authors":"Jinqiao Shi, B. He, Liwei Zhang, Jianwei Zhang","doi":"10.1109/IROS.2016.7759666","DOIUrl":null,"url":null,"abstract":"Real-time 3D mapping with MAV (Micro Aerial Vehicle) in GPS-denied environment is a challenging problem. In this paper, we present an effective vision-based 3D mapping system with 2D laser-scanner. All algorithms necessary for this system are on-board. In this system, two cameras work together with the laser-scanner for motion estimation. The distance of the points detected by laser-scanner are transformed and treated as the depth of image features, which improves the robustness and accuracy of the pose estimation. The output of visual odometry is used as an initial pose in the Iterative Closest Point (ICP) algorithm and the motion trajectory is optimized by the registration result. We finally get the MAV's state by fusing IMU with the pose estimation from mapping process. This method maximizes the utility of the point clouds information and overcomes the scale problem of lacking depth information in the monocular visual odometry. The results of the experiments prove that this method has good characteristics in real-time and accuracy.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Vision-based real-time 3D mapping for UAV with laser sensor\",\"authors\":\"Jinqiao Shi, B. He, Liwei Zhang, Jianwei Zhang\",\"doi\":\"10.1109/IROS.2016.7759666\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Real-time 3D mapping with MAV (Micro Aerial Vehicle) in GPS-denied environment is a challenging problem. In this paper, we present an effective vision-based 3D mapping system with 2D laser-scanner. All algorithms necessary for this system are on-board. In this system, two cameras work together with the laser-scanner for motion estimation. The distance of the points detected by laser-scanner are transformed and treated as the depth of image features, which improves the robustness and accuracy of the pose estimation. The output of visual odometry is used as an initial pose in the Iterative Closest Point (ICP) algorithm and the motion trajectory is optimized by the registration result. We finally get the MAV's state by fusing IMU with the pose estimation from mapping process. This method maximizes the utility of the point clouds information and overcomes the scale problem of lacking depth information in the monocular visual odometry. 
The results of the experiments prove that this method has good characteristics in real-time and accuracy.\",\"PeriodicalId\":296337,\"journal\":{\"name\":\"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.2016.7759666\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2016.7759666","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Vision-based real-time 3D mapping for UAV with laser sensor
Abstract: Real-time 3D mapping with a micro aerial vehicle (MAV) in GPS-denied environments is a challenging problem. In this paper, we present an effective vision-based 3D mapping system built around a 2D laser scanner, with all required algorithms running on board. In this system, two cameras work together with the laser scanner for motion estimation: the distances of points detected by the laser scanner are transformed and treated as depths of image features, which improves the robustness and accuracy of pose estimation. The output of the visual odometry is used as the initial pose in the Iterative Closest Point (ICP) algorithm, and the motion trajectory is refined by the registration result. Finally, the MAV's state is obtained by fusing IMU measurements with the pose estimate from the mapping process. This approach makes full use of the point-cloud information and overcomes the scale ambiguity caused by the lack of depth information in monocular visual odometry. Experimental results show that the method achieves good real-time performance and accuracy.
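To make the ICP step described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of seeding scan-to-map ICP with a visual-odometry pose, using Open3D; the names vo_pose, source_points, and target_points are illustrative assumptions.

import numpy as np
import open3d as o3d

def refine_pose_with_icp(source_points, target_points, vo_pose, max_corr_dist=0.5):
    """Refine a visual-odometry pose by registering the current laser scan
    (source) against the accumulated map (target) with point-to-point ICP.

    source_points, target_points: (N, 3) float arrays of 3D points.
    vo_pose: 4x4 homogeneous transform from visual odometry, used here as the
             ICP initial guess, in the spirit of the pipeline in the abstract.
    Returns the refined 4x4 transform used to update the trajectory.
    """
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_points))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_points))

    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, vo_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

Starting ICP from the visual-odometry estimate rather than the identity is what lets the registration converge quickly and reliably enough for on-board, real-time use.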