{"title":"Machine Learning Techniques for Vehicle Matching with Non-Overlapping Visual Features","authors":"Samuel Thornton, S. Dey","doi":"10.1109/CAVS51000.2020.9334562","DOIUrl":null,"url":null,"abstract":"Emerging Vehicle-to-Everything (V2X) technologies promise to improve the perception of streets by enabling data sharing like camera views between multiple vehicles. However, to ensure accuracy of such enhanced perception, the problem of vehicle matching becomes important; the goal of a vehicle matching system is to identify if images of vehicles seen by different cameras correspond to the same vehicle. Such a system is necessary to avoid duplicate detections for a vehicle seen by multiple cameras and to avoid detections being discarded due to a false match being made. One of the most challenging scenarios in vehicle matching is when the camera positions have very large viewpoint differences, as will commonly be the case when the cameras are in geographically separate locations like in vehicles and street infrastructure. In these scenarios, traditional handcrafted features will not be sufficient to create these correspondences due to the lack of common visual features. In this paper we will examine the performance of random forests and neural networks as classifiers for both learned features and high level visual features when used for this vehicles matching problem. Additionally, a novel dataset of vehicles from cameras with very large viewpoint differences was recorded to validate our method; our preliminary results achieve high classification accuracy with low inference time which shows the feasibility of a real time vehicle matching system.","PeriodicalId":409507,"journal":{"name":"2020 IEEE 3rd Connected and Automated Vehicles Symposium (CAVS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 3rd Connected and Automated Vehicles Symposium (CAVS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CAVS51000.2020.9334562","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Emerging Vehicle-to-Everything (V2X) technologies promise to improve street-level perception by enabling the sharing of data, such as camera views, between multiple vehicles. However, to ensure the accuracy of such enhanced perception, the problem of vehicle matching becomes important; the goal of a vehicle matching system is to determine whether images of vehicles seen by different cameras correspond to the same vehicle. Such a system is necessary both to avoid duplicate detections of a vehicle seen by multiple cameras and to avoid detections being discarded due to a false match. One of the most challenging scenarios in vehicle matching arises when the camera positions have very large viewpoint differences, as is common when the cameras are mounted in geographically separate locations such as vehicles and street infrastructure. In these scenarios, traditional handcrafted features are insufficient to establish correspondences due to the lack of common visual features. In this paper we examine the performance of random forests and neural networks as classifiers over both learned features and high-level visual features for this vehicle matching problem. Additionally, a novel dataset of vehicles captured by cameras with very large viewpoint differences was recorded to validate our method; our preliminary results achieve high classification accuracy with low inference time, demonstrating the feasibility of a real-time vehicle matching system.
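The paper itself does not include code, but the setup it describes (casting vehicle matching as binary classification over pairs of per-view feature vectors) can be sketched briefly. The following is a minimal, hypothetical illustration using scikit-learn's RandomForestClassifier on synthetic features; the paper's actual learned and high-level visual features, dataset, and neural network baseline are not reproduced here.

```python
# Minimal sketch of the pair-classification formulation of vehicle
# matching: concatenate the feature vectors extracted from two camera
# views and classify the pair as same-vehicle (1) or different (0).
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_pairs(n_pairs, feat_dim=64):
    """Generate synthetic (view_a, view_b, label) pairs.

    Positive pairs share an underlying vehicle embedding observed
    through two noisy "viewpoints"; negative pairs combine unrelated
    embeddings. Real features would come from the image pipeline.
    """
    base = rng.normal(size=(n_pairs, feat_dim))
    view_a = base + 0.3 * rng.normal(size=base.shape)
    pos_b = base + 0.3 * rng.normal(size=base.shape)
    neg_b = rng.normal(size=(n_pairs, feat_dim))
    labels = rng.integers(0, 2, size=n_pairs)
    view_b = np.where(labels[:, None] == 1, pos_b, neg_b)
    # The classifier sees the concatenated pair of feature vectors.
    X = np.concatenate([view_a, view_b], axis=1)
    return X, labels

X, y = make_pairs(2000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("match accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Random forests are a natural first choice in this setting because inference is cheap (a fixed number of shallow tree traversals per pair), which aligns with the real-time requirement the abstract emphasizes.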