Machine Learning Techniques for Vehicle Matching with Non-Overlapping Visual Features

Samuel Thornton, S. Dey
DOI: 10.1109/CAVS51000.2020.9334562
Published in: 2020 IEEE 3rd Connected and Automated Vehicles Symposium (CAVS), November 2020
Citations: 5

Abstract

Emerging Vehicle-to-Everything (V2X) technologies promise to improve street perception by enabling data sharing, such as camera views, between multiple vehicles. To ensure the accuracy of this enhanced perception, however, the problem of vehicle matching becomes important: the goal of a vehicle matching system is to determine whether images of vehicles seen by different cameras correspond to the same vehicle. Such a system is necessary both to avoid duplicate detections of a vehicle seen by multiple cameras and to avoid detections being discarded because of a false match. One of the most challenging scenarios in vehicle matching arises when the camera positions have very large viewpoint differences, as is commonly the case when the cameras are in geographically separate locations such as vehicles and street infrastructure. In these scenarios, traditional handcrafted features are not sufficient to establish correspondences because of the lack of common visual features. In this paper we examine the performance of random forests and neural networks as classifiers over both learned features and high-level visual features for this vehicle matching problem. Additionally, a novel dataset of vehicles captured by cameras with very large viewpoint differences was recorded to validate our method; our preliminary results achieve high classification accuracy with low inference time, which shows the feasibility of a real-time vehicle matching system.
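The core formulation described above, training a classifier to decide whether a pair of per-camera feature vectors depicts the same vehicle, can be sketched in a few lines. The following is a minimal illustration only, not the paper's actual pipeline: the features are synthetic stand-ins for per-camera appearance descriptors, and the elementwise absolute difference used to combine each pair is an assumed design choice, as is the use of scikit-learn's `RandomForestClassifier`.

```python
# Hypothetical sketch of same-vehicle / different-vehicle pair classification.
# Synthetic "identity" vectors stand in for per-camera appearance features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_pairs(n_vehicles=200, feat_dim=32, noise=0.3):
    """Build labeled pairs: 1 = same vehicle, 0 = different vehicles."""
    identities = rng.normal(size=(n_vehicles, feat_dim))
    X, y = [], []
    for i in range(n_vehicles):
        # Two noisy "views" of the same vehicle from different cameras.
        a = identities[i] + rng.normal(scale=noise, size=feat_dim)
        b = identities[i] + rng.normal(scale=noise, size=feat_dim)
        # A view of a different vehicle, for a negative pair.
        c = identities[(i + 1) % n_vehicles] + rng.normal(scale=noise, size=feat_dim)
        # Pair representation: elementwise absolute feature difference.
        X.append(np.abs(a - b)); y.append(1)
        X.append(np.abs(a - c)); y.append(0)
    return np.array(X), np.array(y)

X, y = make_pairs()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"match accuracy: {acc:.2f}")
```

The absolute-difference pairing makes the decision easy for axis-aligned tree splits (matched pairs yield small differences, mismatched pairs large ones); with the large viewpoint gaps the paper targets, learned features would replace these synthetic vectors.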