VIPeR: Visual Incremental Place Recognition With Adaptive Mining and Continual Learning

Impact Factor: 5.3 · CAS Tier 2 (Computer Science) · JCR Q2 (Robotics) · IEEE Robotics and Automation Letters · Pub Date: 2025-02-05 · DOI: 10.1109/LRA.2025.3539093
Yuhang Ming;Minyang Xu;Xingrui Yang;Weicai Ye;Weihan Wang;Yong Peng;Weichen Dai;Wanzeng Kong
IEEE Robotics and Automation Letters, vol. 10, no. 3, pp. 3038-3045. Journal Article. Not open access. Full text: https://ieeexplore.ieee.org/document/10873856/
Citations: 0

Abstract

Visual place recognition (VPR) is essential to many autonomous systems. Existing VPR methods demonstrate attractive performance at the cost of limited generalizability. When deployed in unseen environments, these methods exhibit significant performance drops. Targeting this issue, we present VIPeR, a novel approach for visual incremental place recognition with the ability to adapt to new environments while retaining the performance of previous ones. We first introduce an adaptive mining strategy that balances the performance within a single environment and the generalizability across multiple environments. Then, to prevent catastrophic forgetting in continual learning, we design a novel multi-stage memory bank for explicit rehearsal. Additionally, we propose a probabilistic knowledge distillation to explicitly safeguard the previously learned knowledge. We evaluate our proposed VIPeR on three large-scale datasets—Oxford Robotcar, Nordland, and TartanAir. For comparison, we first set a baseline performance with naive finetuning. Then, several more recent continual learning methods are compared. Our VIPeR achieves better performance in almost all aspects with the biggest improvement of 13.85% in average performance.
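The abstract names an adaptive mining strategy that trades off per-environment performance against cross-environment generalizability. A common reading of this in metric-learning VPR is adjusting how hard the mined triplet negatives are: always taking the hardest negative sharpens the current environment, while sampling negatives more uniformly favors generalization. The schedule below is a minimal sketch of that idea; the function names, the `hardness` parameter, and the interpolation itself are illustrative assumptions, not VIPeR's actual algorithm.

```python
import random

def triplet_loss(d_ap, d_an, margin=0.5):
    """Hinge triplet loss on anchor-positive vs anchor-negative distances."""
    return max(0.0, d_ap - d_an + margin)

def adaptively_mined_negative(neg_dists, hardness, rng):
    """Pick a negative from the hardest (closest) fraction of candidates.

    hardness near 0 mines only the hardest negatives (sharpens the current
    environment); hardness near 1 samples uniformly over all negatives
    (favors generalization). The schedule VIPeR actually uses is not given
    in the abstract.
    """
    ranked = sorted(neg_dists)  # closest (hardest) negatives first
    k = max(1, int(round(hardness * len(ranked))))
    return rng.choice(ranked[:k])

rng = random.Random(0)
hard = adaptively_mined_negative([0.9, 0.2, 0.5], hardness=0.0, rng=rng)
loss = triplet_loss(d_ap=0.8, d_an=hard, margin=0.5)
```

With `hardness=0.0` only the single closest negative (distance 0.2) can be mined, giving the largest possible loss for this candidate set; raising `hardness` toward 1.0 lets easier negatives through and softens the gradient signal.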
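For the rehearsal component, the abstract only says VIPeR keeps a multi-stage memory bank and replays stored samples to counter catastrophic forgetting. A single-stage reservoir-sampling buffer is a minimal stand-in for that idea; the class and method names here are hypothetical, and VIPeR's multi-stage organization is not reproduced.

```python
import random

class RehearsalBuffer:
    """Fixed-capacity memory bank of past-environment samples."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.samples = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            # Reservoir sampling: each stream element is retained with
            # probability capacity / seen, so the buffer stays an unbiased
            # sample of everything observed so far.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = sample

    def replay_batch(self, k):
        """Draw k stored samples to mix into the current training batch."""
        return self.rng.sample(self.samples, min(k, len(self.samples)))

buf = RehearsalBuffer(capacity=8)
for i in range(1000):
    buf.add(("env_a", i))
replay = buf.replay_batch(4)
```

During training on a new environment, each batch would interleave fresh samples with `replay_batch` draws so that gradients continue to reflect previously seen places.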
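The probabilistic knowledge distillation term safeguards previously learned knowledge by constraining the adapting model against a frozen copy of itself. One standard probabilistic form is a KL divergence between temperature-softened teacher and student score distributions; whether VIPeR uses this exact loss is an assumption based only on the abstract's wording.

```python
import math

def softmax(scores, temperature):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def probabilistic_kd_loss(teacher_scores, student_scores, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    Zero when the student reproduces the teacher's relative scores;
    grows as the student drifts away from previously learned rankings.
    """
    p = softmax(teacher_scores, temperature)
    q = softmax(student_scores, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

aligned = probabilistic_kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
drifted = probabilistic_kd_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

Adding such a term to the new-environment training loss penalizes exactly the drift that causes forgetting: `aligned` is zero, while `drifted` is strictly positive.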
Source journal: IEEE Robotics and Automation Letters (Computer Science: Computer Science Applications)
CiteScore: 9.60
Self-citation rate: 15.40%
Articles published per year: 1428
Journal description: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.