Leveraging CNNs for Panoramic Image Matching Based on Improved Cube Projection Model

Remote Sens. Pub Date: 2023-07-05 DOI: 10.3390/rs15133411
Tian Gao, Chaozhen Lan, Longhao Wang, Wenjun Huang, Fushan Yao, Zijun Wei
{"title":"基于改进立方体投影模型的全景图像匹配利用cnn","authors":"Tian Gao, Chaozhen Lan, Longhao Wang, Wenjun Huang, Fushan Yao, Zijun Wei","doi":"10.3390/rs15133411","DOIUrl":null,"url":null,"abstract":"Three-dimensional (3D) scene reconstruction plays an important role in digital cities, virtual reality, and simultaneous localization and mapping (SLAM). In contrast to perspective images, a single panoramic image can contain the complete scene information because of the wide field of view. The extraction and matching of image feature points is a critical and difficult part of 3D scene reconstruction using panoramic images. We attempted to solve this problem using convolutional neural networks (CNNs). Compared with traditional feature extraction and matching algorithms, the SuperPoint (SP) and SuperGlue (SG) algorithms have advantages for handling images with distortions. However, the rich content of panoramic images leads to a significant disadvantage of these algorithms with regard to time loss. To address this problem, we introduce the Improved Cube Projection Model: First, the panoramic image is projected into split-frame perspective images with significant overlap in six directions. Second, the SP and SG algorithms are used to process the six split-frame images in parallel for feature extraction and matching. Finally, matching points are mapped back to the panoramic image through coordinate inverse mapping. Experimental results in multiple environments indicated that the algorithm can not only guarantee the number of feature points extracted and the accuracy of feature point extraction but can also significantly reduce the computation time compared to other commonly used algorithms.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Leveraging CNNs for Panoramic Image Matching Based on Improved Cube Projection Model\",\"authors\":\"Tian Gao, Chaozhen Lan, Longhao Wang, Wenjun Huang, Fushan Yao, Zijun Wei\",\"doi\":\"10.3390/rs15133411\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Three-dimensional (3D) scene reconstruction plays an important role in digital cities, virtual reality, and simultaneous localization and mapping (SLAM). In contrast to perspective images, a single panoramic image can contain the complete scene information because of the wide field of view. The extraction and matching of image feature points is a critical and difficult part of 3D scene reconstruction using panoramic images. We attempted to solve this problem using convolutional neural networks (CNNs). Compared with traditional feature extraction and matching algorithms, the SuperPoint (SP) and SuperGlue (SG) algorithms have advantages for handling images with distortions. However, the rich content of panoramic images leads to a significant disadvantage of these algorithms with regard to time loss. To address this problem, we introduce the Improved Cube Projection Model: First, the panoramic image is projected into split-frame perspective images with significant overlap in six directions. Second, the SP and SG algorithms are used to process the six split-frame images in parallel for feature extraction and matching. Finally, matching points are mapped back to the panoramic image through coordinate inverse mapping. 
Experimental results in multiple environments indicated that the algorithm can not only guarantee the number of feature points extracted and the accuracy of feature point extraction but can also significantly reduce the computation time compared to other commonly used algorithms.\",\"PeriodicalId\":20944,\"journal\":{\"name\":\"Remote. Sens.\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Remote. Sens.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/rs15133411\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Remote. Sens.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/rs15133411","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Three-dimensional (3D) scene reconstruction plays an important role in digital cities, virtual reality, and simultaneous localization and mapping (SLAM). In contrast to perspective images, a single panoramic image can contain the complete scene information because of the wide field of view. The extraction and matching of image feature points is a critical and difficult part of 3D scene reconstruction using panoramic images. We attempted to solve this problem using convolutional neural networks (CNNs). Compared with traditional feature extraction and matching algorithms, the SuperPoint (SP) and SuperGlue (SG) algorithms have advantages for handling images with distortions. However, the rich content of panoramic images leads to a significant disadvantage of these algorithms with regard to time loss. To address this problem, we introduce the Improved Cube Projection Model: First, the panoramic image is projected into split-frame perspective images with significant overlap in six directions. Second, the SP and SG algorithms are used to process the six split-frame images in parallel for feature extraction and matching. Finally, matching points are mapped back to the panoramic image through coordinate inverse mapping. Experimental results in multiple environments indicated that the algorithm can not only guarantee the number of feature points extracted and the accuracy of feature point extraction but can also significantly reduce the computation time compared to other commonly used algorithms.
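
The first step of the pipeline, splitting the panorama into six overlapping perspective views, can be illustrated with a short sketch. The function below renders one perspective "cube face" from an equirectangular panorama using NumPy only; the face size, the per-face field of view (set above 90° so that adjacent faces overlap), and the yaw/pitch sign convention are illustrative assumptions, not the exact parameters used in the paper.

```python
import numpy as np

def equirect_to_face(pano, yaw, pitch, fov_deg=100.0, face_size=512):
    """Render one perspective 'cube face' from an equirectangular panorama.

    pano       : H x W x 3 equirectangular image
    yaw, pitch : viewing direction of the face centre, in radians
    fov_deg    : per-face field of view; values above 90 degrees give the
                 overlap between adjacent faces that the model relies on
    """
    H, W = pano.shape[:2]
    half = np.tan(np.radians(fov_deg) / 2.0)

    # Pixel grid on the tangent plane of the face
    # (camera looks along +z, image y points down).
    t = ((np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0) * half
    xx, yy = np.meshgrid(t, t)
    dirs = np.stack([xx, yy, np.ones_like(xx)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the viewing rays: pitch about x, then yaw about y.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> spherical coordinates -> panorama pixel (nearest neighbour).
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    px = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    py = ((lat / np.pi + 0.5) * H).astype(int).clip(0, H - 1)
    return pano[py, px]

# Six split-frame views: four around the horizon plus zenith and nadir
# (under this sign convention, positive pitch looks toward the top of the panorama).
views = [(0.0, 0.0), (np.pi / 2, 0.0), (np.pi, 0.0), (-np.pi / 2, 0.0),
         (0.0, np.pi / 2), (0.0, -np.pi / 2)]

# Example with a dummy panorama:
pano = np.zeros((1024, 2048, 3), dtype=np.float32)
faces = [equirect_to_face(pano, yaw, pitch) for yaw, pitch in views]
```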
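
The second step, running feature extraction and matching on the six face pairs in parallel, can be sketched as follows. The `match_faces` callable is a hypothetical stand-in for a SuperPoint-plus-SuperGlue matcher, and the use of a thread pool is an assumption about how the per-face work might be dispatched, not the authors' implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def match_panoramas(faces_a, faces_b, match_faces):
    """Match two panoramas face by face, concurrently.

    faces_a, faces_b : lists of six face images from the two panoramas,
                       rendered with the same (yaw, pitch) order
    match_faces      : callable(img_a, img_b) -> (kpts_a, kpts_b), a hypothetical
                       wrapper around SuperPoint extraction and SuperGlue matching
    """
    with ThreadPoolExecutor(max_workers=6) as pool:
        # One (kpts_a, kpts_b) pair of matched keypoint arrays per face direction.
        return list(pool.map(match_faces, faces_a, faces_b))
```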
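
Finally, keypoints matched on a face can be mapped back to panoramic pixel coordinates with the same geometry used for the projection. The sketch below reuses the assumptions above (field of view, face size, sign conventions) and is not the paper's code.

```python
import numpy as np

def face_point_to_pano(pt, yaw, pitch, pano_shape, fov_deg=100.0, face_size=512):
    """Map a keypoint (x, y) detected on a cube face back to panorama pixel coords."""
    H, W = pano_shape[:2]
    half = np.tan(np.radians(fov_deg) / 2.0)

    # Face pixel -> tangent-plane coordinates -> unit viewing ray.
    x = ((pt[0] + 0.5) / face_size * 2.0 - 1.0) * half
    y = ((pt[1] + 0.5) / face_size * 2.0 - 1.0) * half
    d = np.array([x, y, 1.0])
    d /= np.linalg.norm(d)

    # Same rotation as in the forward projection: pitch about x, then yaw about y.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = (Ry @ Rx) @ d

    # Ray -> spherical coordinates -> sub-pixel panorama coordinates.
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
    u = ((lon / (2 * np.pi) + 0.5) * W) % W
    v = (lat / np.pi + 0.5) * H
    return u, v
```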