VNI-Net: Vector neurons-based rotation-invariant descriptor for LiDAR place recognition

ISPRS Journal of Photogrammetry and Remote Sensing · Impact Factor 10.6 · CAS Tier 1 (Earth Science) · JCR Q1 (Geography, Physical) · Publication date: 2024-10-01 · DOI: 10.1016/j.isprsjprs.2024.09.011
Gengxuan Tian, Junqiao Zhao, Yingfeng Cai, Fenglin Zhang, Xufei Wang, Chen Ye, Sisi Zlatanova, Tiantian Feng
Volume 218, Pages 506-517. Full text: https://www.sciencedirect.com/science/article/pii/S0924271624003496
Citations: 0

Abstract

Despite the emergence of various LiDAR-based place recognition methods, place recognition failure due to rotation remains a critical challenge. Existing studies have attempted to address this limitation through specific training strategies involving data augmentation and rotation-invariant networks. However, augmenting 3D rotations (SO(3)) is impractical for the former, while the latter primarily focuses on the reduced problem of 2D rotation (SO(2)) invariance. Existing methods targeting SO(3) rotation invariance suffer from limited discriminative capability. In this paper, we propose a novel approach (VNI-Net) based on the Vector Neurons Network (VNN) to achieve SO(3) rotation invariance. Our method begins by extracting rotation-equivariant features from neighboring points and projecting these low-dimensional features into a high-dimensional space using VNN. We then compute both Euclidean and cosine distances in the rotation-equivariant feature space to obtain rotation-invariant features. Finally, we aggregate these features using generalized-mean (GeM) pooling to generate the global descriptor. To mitigate the significant information loss associated with formulating rotation-invariant features, we propose computing distances between features at different layers within the Euclidean space neighborhood. This approach significantly enhances the discriminability of the descriptors while maintaining computational efficiency. We conduct experiments across multiple publicly available datasets captured with vehicle-mounted, drone-mounted, and handheld LiDAR sensors. VNI-Net outperforms baseline methods by up to 15.3% on datasets with rotation, while achieving results comparable to state-of-the-art place recognition methods on datasets with less rotation. Our code is open-sourced at https://github.com/tiev-tongji/VNI-Net.
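The pipeline described in the abstract (rotation-equivariant vector features, invariant distances computed from them, then GeM pooling) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the channel count, the consecutive-channel pairing for cosine distances, and the clipping inside GeM pooling are illustrative assumptions. The key property it demonstrates is that channel norms and inter-channel cosine similarities are unchanged when a global rotation is applied to the input.

```python
# Minimal sketch (not the VNI-Net code) of rotation-invariant features derived
# from rotation-equivariant "vector neuron" features. Each point carries C
# vector channels, stored as an array F of shape (C, 3). A global rotation R
# acts channel-wise as F -> F @ R.T, so norms and inner products between
# channels are invariant; GeM pooling then aggregates per-point features.
import numpy as np

def random_rotation(rng):
    # Sample a random proper rotation in SO(3) via QR decomposition
    # of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # fix column signs for a unique factorization
    if np.linalg.det(q) < 0:      # ensure det = +1 (proper rotation)
        q[:, 0] *= -1
    return q

def invariant_features(F, eps=1e-8):
    # F: (C, 3) rotation-equivariant vector features for one point.
    # Euclidean norms of each channel, and cosine similarities between
    # consecutive channels (an illustrative pairing), are both invariant
    # under F -> F @ R.T because rotations preserve dot products.
    norms = np.linalg.norm(F, axis=1)
    a, b = F[:-1], F[1:]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return np.concatenate([norms, cos])

def gem_pool(X, p=3.0, eps=1e-6):
    # Generalized-mean (GeM) pooling over the point axis: (N, D) -> (D,).
    # GeM assumes nonnegative activations; we clip as a simple approximation.
    return np.mean(np.clip(X, eps, None) ** p, axis=0) ** (1.0 / p)
```

As a quick sanity check, `invariant_features(F)` and `invariant_features(F @ R.T)` agree to numerical precision for any rotation `R`, which is exactly the property the paper exploits to make the global descriptor independent of the scan's orientation.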
Source journal

ISPRS Journal of Photogrammetry and Remote Sensing (Engineering and Technology; Imaging Science and Photographic Technology)
CiteScore: 21.00 · Self-citation rate: 6.30% · Articles per year: 273 · Review time: 40 days
About the journal: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It serves as a platform for scientists and professionals worldwide working in disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS publishes high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. The journal also welcomes papers based on presentations from ISPRS meetings, provided they are significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. Theoretical papers should preferably include practical applications, while papers focusing on systems and applications should include a theoretical background.
Latest articles in this journal

ACMatch: Improving context capture for two-view correspondence learning via adaptive convolution
MIWC: A multi-temporal image weighted composition method for satellite-derived bathymetry in shallow waters
A universal adapter in segmentation models for transferable landslide mapping
Contrastive learning for real SAR image despeckling
B3-CDG: A pseudo-sample diffusion generator for bi-temporal building binary change detection