Multi-Task Occlusion Learning for Real-Time Visual Object Tracking
Gozde Sahin, L. Itti
2021 IEEE International Conference on Image Processing (ICIP), published 2021-09-19
DOI: 10.1109/ICIP42928.2021.9506239 (https://doi.org/10.1109/ICIP42928.2021.9506239)
Citations: 3
Abstract
Occlusion handling is one of the important challenges in the field of visual tracking, especially for real-time applications, where further processing for occlusion reasoning may not always be possible. In this paper, an occlusion-aware real-time object tracker is proposed, which enhances the baseline SiamRPN model with an additional branch that directly predicts the occlusion level of the object. Experimental results on the GOT-10k and VOT benchmarks show that learning to predict occlusion levels end-to-end in this multi-task learning framework helps improve tracking accuracy, especially on frames that contain occlusions. Up to 7% improvement in EAO scores can be observed on occluded frames, which constitute only 11% of the data. The performance results over all frames also indicate that the model performs favorably compared to other trackers.
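The multi-task setup described above can be sketched as a weighted sum of the baseline SiamRPN tracking losses and the new occlusion-branch loss. This is a minimal illustration, not the paper's actual implementation; the function name and the weight `occlusion_weight` are assumptions.

```python
def multi_task_loss(cls_loss, reg_loss, occ_loss, occlusion_weight=0.5):
    """Combine SiamRPN's classification and box-regression losses with
    the auxiliary occlusion-level prediction loss (hypothetical sketch).

    occlusion_weight balances the auxiliary task against tracking; its
    value here is an assumption, not taken from the paper.
    """
    return cls_loss + reg_loss + occlusion_weight * occ_loss

# Example: the occlusion branch contributes a weighted extra term on top
# of the standard tracking objective.
total = multi_task_loss(cls_loss=0.8, reg_loss=0.4, occ_loss=0.6)  # -> 1.5
```

Training the occlusion branch jointly in this way lets the shared backbone learn occlusion-sensitive features without any extra inference-time reasoning, which is what keeps the tracker real-time.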