Structured Pruning for Efficient Visual Place Recognition

Oliver Grainge, Michael Milford, Indu Bodala, Sarvapali D. Ramchurn, Shoaib Ehsan
{"title":"Structured Pruning for Efficient Visual Place Recognition","authors":"Oliver Grainge, Michael Milford, Indu Bodala, Sarvapali D. Ramchurn, Shoaib Ehsan","doi":"arxiv-2409.07834","DOIUrl":null,"url":null,"abstract":"Visual Place Recognition (VPR) is fundamental for the global re-localization\nof robots and devices, enabling them to recognize previously visited locations\nbased on visual inputs. This capability is crucial for maintaining accurate\nmapping and localization over large areas. Given that VPR methods need to\noperate in real-time on embedded systems, it is critical to optimize these\nsystems for minimal resource consumption. While the most efficient VPR\napproaches employ standard convolutional backbones with fixed descriptor\ndimensions, these often lead to redundancy in the embedding space as well as in\nthe network architecture. Our work introduces a novel structured pruning\nmethod, to not only streamline common VPR architectures but also to\nstrategically remove redundancies within the feature embedding space. This dual\nfocus significantly enhances the efficiency of the system, reducing both map\nand model memory requirements and decreasing feature extraction and retrieval\nlatencies. Our approach has reduced memory usage and latency by 21% and 16%,\nrespectively, across models, while minimally impacting recall@1 accuracy by\nless than 1%. This significant improvement enhances real-time applications on\nedge devices with negligible accuracy loss.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Visual Place Recognition (VPR) is fundamental to the global re-localization of robots and devices, enabling them to recognize previously visited locations from visual inputs. This capability is crucial for maintaining accurate mapping and localization over large areas. Because VPR methods must operate in real time on embedded systems, it is critical to optimize them for minimal resource consumption. While the most efficient VPR approaches employ standard convolutional backbones with fixed descriptor dimensions, these often lead to redundancy in the embedding space as well as in the network architecture. Our work introduces a novel structured pruning method that not only streamlines common VPR architectures but also strategically removes redundancies within the feature embedding space. This dual focus significantly improves system efficiency, reducing both map and model memory requirements and decreasing feature extraction and retrieval latencies. Across models, our approach reduces memory usage and latency by 21% and 16%, respectively, while lowering recall@1 accuracy by less than 1%. This improvement enables real-time applications on edge devices with negligible accuracy loss.
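The paper does not publish its pruning code here, but the core idea of structured pruning can be illustrated with a minimal sketch: remove whole convolutional filters (ranked here by L1 norm, one common criterion) from the last layer of a backbone, which simultaneously shrinks the weight tensors and the dimensionality of the pooled global descriptor. The `TinyBackbone` model, the `keep_ratio` parameter, and the L1 ranking are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Toy VPR backbone: two conv layers + global average pooling descriptor."""
    def __init__(self, out_channels: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, out_channels, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)  # descriptor dimension == out_channels

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.pool(x).flatten(1)       # (B, out_channels) place descriptor

def prune_last_conv(model: TinyBackbone, keep_ratio: float = 0.75) -> TinyBackbone:
    """Keep the conv2 filters with the largest L1 norms; drop the rest.

    Removing output channels of the final conv shrinks both the model
    (memory / FLOPs) and the descriptor (map memory / retrieval latency),
    i.e. the dual redundancy the paper targets.
    """
    w = model.conv2.weight.data                      # (C_out, C_in, k, k)
    n_keep = max(1, int(w.size(0) * keep_ratio))
    scores = w.abs().sum(dim=(1, 2, 3))              # L1 norm per output filter
    keep = torch.topk(scores, n_keep).indices.sort().values

    pruned = TinyBackbone(out_channels=n_keep)
    pruned.conv1.load_state_dict(model.conv1.state_dict())
    pruned.conv2.weight.data = w[keep].clone()
    pruned.conv2.bias.data = model.conv2.bias.data[keep].clone()
    return pruned

if __name__ == "__main__":
    model = TinyBackbone()
    small = prune_last_conv(model, keep_ratio=0.75)
    x = torch.randn(2, 3, 64, 64)
    print(model(x).shape, small(x).shape)  # (2, 256) -> (2, 192)
```

In a real pipeline the pruned network would be fine-tuned afterwards to recover accuracy, and the smaller descriptor would directly reduce the size of the stored map and the cost of nearest-neighbour retrieval.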