{"title":"基于摄影测量中的估计参数,用 QR 码整合多个密集点云,以减少计算时间","authors":"Keita Nakamura, Keita Baba, Yutaka Watanobe, Toshihide Hanari, Taku Matsumoto, Takashi Imabuchi, Kuniaki Kawabata","doi":"10.1007/s10015-024-00966-3","DOIUrl":null,"url":null,"abstract":"<div><p>This paper describes a method for integrating multiple dense point clouds using a shared landmark to generate a single real-scale integrated result for photogrammetry. It is difficult to integrate high-density point clouds reconstructed by photogrammetry because the scale differs with each photogrammetry. To solve this problem, this study places a QR code of known sizes, which is a shared landmark, in the reconstruction target environment and divides the reconstruction target environment based on the position of the QR code that is placed. Then, photogrammetry is performed for each divided environment to obtain each high-density point cloud. Finally, we propose a method of scaling each high-density point cloud based on the size of the QR code and aligning each high-density point cloud as a single high-point cloud by partial-to-partial registration. To verify the effectiveness of the method, this paper compares the results obtained by applying all images to photogrammetry with those obtained by the proposed method in terms of accuracy and computation time. In this verification, ideal images generated by simulation and images obtained in real environments are applied to photogrammetry. We clarify the relationship between the number of divided environments, the accuracy of the reconstruction result, and the computation time required for the reconstruction.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":null,"pages":null},"PeriodicalIF":0.8000,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-024-00966-3.pdf","citationCount":"0","resultStr":"{\"title\":\"Integration of multiple dense point clouds based on estimated parameters in photogrammetry with QR code for reducing computation time\",\"authors\":\"Keita Nakamura, Keita Baba, Yutaka Watanobe, Toshihide Hanari, Taku Matsumoto, Takashi Imabuchi, Kuniaki Kawabata\",\"doi\":\"10.1007/s10015-024-00966-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This paper describes a method for integrating multiple dense point clouds using a shared landmark to generate a single real-scale integrated result for photogrammetry. It is difficult to integrate high-density point clouds reconstructed by photogrammetry because the scale differs with each photogrammetry. To solve this problem, this study places a QR code of known sizes, which is a shared landmark, in the reconstruction target environment and divides the reconstruction target environment based on the position of the QR code that is placed. Then, photogrammetry is performed for each divided environment to obtain each high-density point cloud. Finally, we propose a method of scaling each high-density point cloud based on the size of the QR code and aligning each high-density point cloud as a single high-point cloud by partial-to-partial registration. To verify the effectiveness of the method, this paper compares the results obtained by applying all images to photogrammetry with those obtained by the proposed method in terms of accuracy and computation time. 
In this verification, ideal images generated by simulation and images obtained in real environments are applied to photogrammetry. We clarify the relationship between the number of divided environments, the accuracy of the reconstruction result, and the computation time required for the reconstruction.</p></div>\",\"PeriodicalId\":46050,\"journal\":{\"name\":\"Artificial Life and Robotics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2024-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10015-024-00966-3.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Life and Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10015-024-00966-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Life and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s10015-024-00966-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ROBOTICS","Score":null,"Total":0}
Integration of multiple dense point clouds based on estimated parameters in photogrammetry with QR code for reducing computation time
This paper describes a method for integrating multiple dense point clouds using a shared landmark to generate a single real-scale integrated result for photogrammetry. High-density point clouds reconstructed by photogrammetry are difficult to integrate because each reconstruction has its own arbitrary scale. To solve this problem, this study places a QR code of known size, which serves as a shared landmark, in the reconstruction target environment and divides that environment based on the position of the placed QR code. Photogrammetry is then performed on each divided environment to obtain a high-density point cloud for it. Finally, we propose a method that scales each high-density point cloud based on the size of the QR code and aligns the clouds into a single high-density point cloud by partial-to-partial registration. To verify the effectiveness of the method, this paper compares, in terms of accuracy and computation time, the result obtained by applying all images to a single photogrammetry run with the results obtained by the proposed method. In this verification, both ideal images generated by simulation and images captured in real environments are applied to photogrammetry. We clarify the relationship between the number of divided environments, the accuracy of the reconstruction result, and the computation time required for the reconstruction.
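A minimal sketch, assuming Python with NumPy, of the two geometric ingredients named in the abstract: rescaling a reconstructed cloud to real scale from the known physical edge length of the QR code, and deriving a rigid transform from the shared QR-code corners that could serve as an initial guess for partial-to-partial registration. The function names (`rescale_to_real_scale`, `align_by_qr_corners`) and the assumption that corner points are available in order around the code are hypothetical; this is not the authors' implementation.

```python
# Illustrative sketch only (not the paper's implementation).
import numpy as np


def rescale_to_real_scale(points: np.ndarray,
                          qr_corners: np.ndarray,
                          qr_edge_length_m: float) -> np.ndarray:
    """Scale a reconstructed point cloud so the QR code has its true size.

    points           -- (N, 3) reconstructed point cloud in arbitrary scale
    qr_corners       -- (4, 3) detected QR corner positions in the same frame,
                        assumed to be ordered around the square (hypothetical)
    qr_edge_length_m -- known physical edge length of the printed QR code [m]
    """
    # Mean edge length of the QR code as measured in the reconstruction.
    edges = np.linalg.norm(np.roll(qr_corners, -1, axis=0) - qr_corners, axis=1)
    measured_edge = edges.mean()
    # Ratio of true size to reconstructed size gives the scale factor.
    scale = qr_edge_length_m / measured_edge
    return points * scale


def align_by_qr_corners(src_corners: np.ndarray,
                        dst_corners: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Rigid transform (R, t) mapping src QR corners onto dst QR corners
    via the Kabsch algorithm, usable as a coarse initialization for
    partial-to-partial registration of the two rescaled clouds."""
    src_c, dst_c = src_corners.mean(axis=0), dst_corners.mean(axis=0)
    H = (src_corners - src_c).T @ (dst_corners - dst_c)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction so R is a proper rotation (det(R) = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In use, each divided reconstruction would first be rescaled with the QR edge length, and the Kabsch transform from the shared corners would then seed a finer partial-to-partial registration (e.g. ICP) between the rescaled clouds; the paper's actual registration procedure may differ.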