NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences.

Diwei Sheng, Yuxiang Chai, Xinru Li, Chen Feng, Jianzhe Lin, Claudio Silva, John-Ross Rizzo
{"title":"NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences.","authors":"Diwei Sheng,&nbsp;Yuxiang Chai,&nbsp;Xinru Li,&nbsp;Chen Feng,&nbsp;Jianzhe Lin,&nbsp;Claudio Silva,&nbsp;John-Ross Rizzo","doi":"10.1109/iros51168.2021.9636640","DOIUrl":null,"url":null,"abstract":"<p><p>Visual place recognition (VPR) is critical in not only localization and mapping for autonomous driving vehicles, but also assistive navigation for the visually impaired population. To enable a long-term VPR system on a large scale, several challenges need to be addressed. First, different applications could require different image view directions, such as front views for self-driving cars while side views for the low vision people. Second, VPR in metropolitan scenes can often cause privacy concerns due to the imaging of pedestrian and vehicle identity information, calling for the need for data anonymization before VPR queries and database construction. Both factors could lead to VPR performance variations that are not well understood yet. To study their influences, we present the NYU-VPR dataset that contains more than 200,000 images over a 2km×2km area near the New York University campus, taken within the whole year of 2016. We present benchmark results on several popular VPR algorithms showing that side views are significantly more challenging for current VPR methods while the influence of data anonymization is almost negligible, together with our hypothetical explanations and in-depth analysis.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":" ","pages":"9773-9779"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9394449/pdf/nihms-1827810.pdf","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iros51168.2021.9636640","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/12/16 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Visual place recognition (VPR) is critical not only for localization and mapping in autonomous driving, but also for assistive navigation for the visually impaired. Enabling a long-term, large-scale VPR system requires addressing several challenges. First, different applications may require different image view directions, such as front views for self-driving cars versus side views for pedestrians with low vision. Second, VPR in metropolitan scenes often raises privacy concerns because pedestrian and vehicle identity information is captured, calling for data anonymization before VPR queries and database construction. Both factors can cause VPR performance variations that are not yet well understood. To study their influence, we present the NYU-VPR dataset, which contains more than 200,000 images over a 2 km × 2 km area near the New York University campus, taken throughout 2016. We present benchmark results for several popular VPR algorithms, showing that side views are significantly more challenging for current VPR methods while the influence of data anonymization is almost negligible, together with hypothesized explanations and in-depth analysis.
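
The paper does not reproduce its evaluation code here, but a VPR benchmark of this kind typically reduces to nearest-neighbor retrieval over global image descriptors followed by a recall@N check against location ground truth. The sketch below illustrates that generic pipeline only; the random stand-in descriptors, the 25 m success radius, and all function names are illustrative assumptions, not details taken from NYU-VPR.

```python
# Minimal sketch of a generic VPR retrieval evaluation (illustrative only;
# descriptor source, success radius, and names are assumptions, not the
# NYU-VPR protocol).
import numpy as np

def recall_at_n(query_desc, db_desc, query_xy, db_xy, n=5, radius_m=25.0):
    """Fraction of queries whose top-n retrieved database images include
    at least one image within `radius_m` meters of the query location."""
    # L2-normalize descriptors so the dot product equals cosine similarity.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)

    sims = q @ d.T                              # (num_queries, num_db)
    top_n = np.argsort(-sims, axis=1)[:, :n]    # indices of the n best matches

    hits = 0
    for i, candidates in enumerate(top_n):
        dists = np.linalg.norm(db_xy[candidates] - query_xy[i], axis=1)
        if np.any(dists <= radius_m):
            hits += 1
    return hits / len(query_desc)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in random descriptors and planar coordinates in meters.
    db_desc = rng.normal(size=(1000, 256))
    q_desc = rng.normal(size=(100, 256))
    db_xy = rng.uniform(0, 2000, size=(1000, 2))   # 2 km x 2 km area
    q_xy = rng.uniform(0, 2000, size=(100, 2))
    print("recall@5:", recall_at_n(q_desc, db_desc, q_xy, db_xy, n=5))
```

Under this kind of setup, the view-direction and anonymization comparisons described in the abstract would amount to running the same retrieval evaluation on front-view versus side-view queries, and on raw versus anonymized images, and comparing the resulting recall values.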
