Robust and Efficient Tracking with Large Lens Distortion for Vehicular Technology Applications
Che-Tsung Lin, Long-Tai Chen, Pai-Wei Cheng, Yuan-fang Wang
2016 IEEE 84th Vehicular Technology Conference (VTC-Fall), September 2016, pp. 1-7
DOI: 10.1109/VTCFall.2016.7881204 (https://doi.org/10.1109/VTCFall.2016.7881204)
Abstract
Advances in video technology have enabled its wide adoption in the auto industry. Today, many vehicles are equipped with backup, front-looking, and side-looking cameras that allow the driver to easily monitor traffic around the vehicle for enhanced safety. One difficulty with performing automated image analysis on a vehicle's onboard video stems from the significant lens distortion these sensors introduce in order to cover a large field of view around the vehicle. This paper presents a tracking scheme that improves the accuracy and density of object tracking in the presence of large lens distortion. The contribution of our research is four-fold: (1) we evaluated a large collection of state-of-the-art trackers to understand their deficiencies when applied to videos with large lens distortion, (2) we showed how to derive useful evaluation metrics from public-domain, real-world driving videos that do not come with ground-truth information on pixel tracking, (3) we identified several enhancement techniques that can potentially improve the poor performance of current trackers on videos with large lens distortion, and (4) we performed a systematic study to validate the efficacy of these enhancement techniques and proposed a new tracker design that achieves substantial improvement over the state of the art, in terms of both accuracy and density, based on a rigorous precision vs. recall analysis.
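As a minimal sketch of the kind of distortion compensation the abstract alludes to (not the paper's actual method), the snippet below undistorts each frame of a fisheye driving clip with OpenCV before running a standard point tracker (pyramidal Lucas-Kanade optical flow). The camera matrix K, distortion coefficients D, and the file name are placeholder values that would normally come from calibrating the onboard camera.

```python
import cv2
import numpy as np

# Hypothetical fisheye intrinsics: camera matrix K and distortion coefficients D (k1..k4).
K = np.array([[300.0, 0.0, 640.0],
              [0.0, 300.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])

cap = cv2.VideoCapture("driving_clip.mp4")  # placeholder clip name
ok, prev = cap.read()
h, w = prev.shape[:2]

# Precompute the undistortion maps once; remapping each frame is then cheap.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)

prev_u = cv2.remap(prev, map1, map2, interpolation=cv2.INTER_LINEAR)
prev_gray = cv2.cvtColor(prev_u, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    frame_u = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(frame_u, cv2.COLOR_BGR2GRAY)
    # Track the feature points on the undistorted frames with Lucas-Kanade flow.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = new_pts[status.ravel() == 1]
    prev_gray, pts = gray, good.reshape(-1, 1, 2)

cap.release()
```

Tracking on the rectified frames is only one candidate enhancement; the paper's systematic study evaluates which such techniques actually improve precision and recall on distorted onboard video.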