Hexi Li, Na Jiang, Chenxin Sun, Zhong Zhou, Wei Wu
{"title":"学习深度外观特征用于多目标跟踪","authors":"Hexi Li, Na Jiang, Chenxin Sun, Zhong Zhou, Wei Wu","doi":"10.1109/ICVRV.2017.00011","DOIUrl":null,"url":null,"abstract":"Multi-target tracking is a worthy studying issue in computer vision. For surveillance video, frequent occlusion and dense crowds complicate the issue. To resolve these difficulties, this paper proposes an effective algorithm of multi-target tracking in videos. Firstly, the faster Rcnn is proposed with the residual network to extract the objects of pedestrians in surveillance videos. The proposedment can effectively eliminate invalid target detection frames, separate peer targets and resist partial occlusions. Then, this paper put forward an accurate and efficient appearance-feature matching network model that is inspired by pedestrian re-identification theory. The deep learning feature-extraction module is composed of the stem Cnn and the Resnet blocks, therefore it can load res-50 caffemodel as pretraining model to increase the accuracy of the featureextraction. Meanwhile, the proposed network can decrease the time of train and test comparing with Resnet. Finally, the obtained multiple target tracking trajectories are further optimized by the strategy of occlusion distinction, deduplication and merging. The experiment results of the 2D MOT 2015 benchmark, KITTI dataset indicate that this proposed algorithm outperforms alternative multiple objects trackers in terms of multiple indicators.","PeriodicalId":187934,"journal":{"name":"2017 International Conference on Virtual Reality and Visualization (ICVRV)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Learning Deep Appearance Feature for Multi-target Tracking\",\"authors\":\"Hexi Li, Na Jiang, Chenxin Sun, Zhong Zhou, Wei Wu\",\"doi\":\"10.1109/ICVRV.2017.00011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-target tracking is a worthy studying issue in computer vision. For surveillance video, frequent occlusion and dense crowds complicate the issue. To resolve these difficulties, this paper proposes an effective algorithm of multi-target tracking in videos. Firstly, the faster Rcnn is proposed with the residual network to extract the objects of pedestrians in surveillance videos. The proposedment can effectively eliminate invalid target detection frames, separate peer targets and resist partial occlusions. Then, this paper put forward an accurate and efficient appearance-feature matching network model that is inspired by pedestrian re-identification theory. The deep learning feature-extraction module is composed of the stem Cnn and the Resnet blocks, therefore it can load res-50 caffemodel as pretraining model to increase the accuracy of the featureextraction. Meanwhile, the proposed network can decrease the time of train and test comparing with Resnet. Finally, the obtained multiple target tracking trajectories are further optimized by the strategy of occlusion distinction, deduplication and merging. 
The experiment results of the 2D MOT 2015 benchmark, KITTI dataset indicate that this proposed algorithm outperforms alternative multiple objects trackers in terms of multiple indicators.\",\"PeriodicalId\":187934,\"journal\":{\"name\":\"2017 International Conference on Virtual Reality and Visualization (ICVRV)\",\"volume\":\"40 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 International Conference on Virtual Reality and Visualization (ICVRV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICVRV.2017.00011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Virtual Reality and Visualization (ICVRV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICVRV.2017.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning Deep Appearance Feature for Multi-target Tracking
Abstract
Multi-target tracking is an important and challenging problem in computer vision. In surveillance video, frequent occlusions and dense crowds make the problem harder. To address these difficulties, this paper proposes an effective multi-target tracking algorithm for video. First, Faster R-CNN with a residual-network backbone is used to detect pedestrians in surveillance video; the detector effectively eliminates invalid detection boxes, separates adjacent targets, and is robust to partial occlusion. Then, inspired by pedestrian re-identification, the paper puts forward an accurate and efficient appearance-feature matching network. Its feature-extraction module is composed of a stem CNN and ResNet blocks, so it can load a pretrained ResNet-50 Caffe model to improve feature-extraction accuracy while requiring less training and testing time than the full ResNet. Finally, the resulting tracking trajectories are further optimized with strategies for occlusion handling, deduplication, and merging. Experimental results on the 2D MOT 2015 benchmark and the KITTI dataset show that the proposed algorithm outperforms alternative multi-object trackers on multiple metrics.
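
As a rough illustration of the detection stage described above, the following sketch uses torchvision's off-the-shelf Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the paper's Faster R-CNN with a residual network; the score threshold and person-only filtering are illustrative assumptions, not values from the paper.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Off-the-shelf detector with a residual-network backbone
# ("weights='DEFAULT'" needs torchvision >= 0.13).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON_CLASS_ID = 1  # "person" in the COCO label map used by this model


def detect_pedestrians(frame_rgb, score_threshold=0.7):
    """Return an (N, 5) tensor of [x1, y1, x2, y2, score] pedestrian boxes."""
    with torch.no_grad():
        output = model([to_tensor(frame_rgb)])[0]
    keep = (output["labels"] == PERSON_CLASS_ID) & (output["scores"] >= score_threshold)
    return torch.cat([output["boxes"][keep], output["scores"][keep].unsqueeze(1)], dim=1)
```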
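
The appearance-feature matching network (a stem CNN followed by ResNet blocks, initialized from a pretrained ResNet-50) might be approximated as in the minimal sketch below. It assumes torchvision's ResNet-50 weights in place of the res-50 Caffe model, keeps only the first two residual stages to mimic the reduced training and testing cost, and uses an illustrative 128-dimensional embedding with cosine-similarity matching; none of these specifics are taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision


class AppearanceNet(nn.Module):
    """Stem CNN + ResNet blocks, initialized from pretrained ResNet-50 weights."""

    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="DEFAULT")
        # Stem CNN: the initial conv/bn/relu/pool layers of ResNet-50.
        self.stem = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool
        )
        # Residual blocks: only the first two stages are kept here (assumption)
        # to keep the network lighter than the full ResNet.
        self.blocks = nn.Sequential(backbone.layer1, backbone.layer2)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.embed = nn.Linear(512, embedding_dim)  # layer2 outputs 512 channels

    def forward(self, crops):
        """Map a batch of pedestrian crops to L2-normalized appearance embeddings."""
        x = self.blocks(self.stem(crops))
        x = self.pool(x).flatten(1)
        return nn.functional.normalize(self.embed(x), dim=1)


def similarity(track_embs, det_embs):
    """Cosine similarity between track and detection embeddings (both normalized)."""
    return track_embs @ det_embs.t()
```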
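
Finally, the trajectory post-processing (occlusion handling, deduplication, and merging) could be sketched roughly as below, merging tracklets whose boxes overlap heavily on the frames they share; the IoU threshold and agreement ratio are illustrative assumptions rather than the paper's actual strategy.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def should_merge(track_a, track_b, iou_thresh=0.7, agreement=0.8):
    """track_*: dict mapping frame index -> [x1, y1, x2, y2]."""
    shared = set(track_a) & set(track_b)
    if not shared:
        return False
    agree = sum(iou(track_a[f], track_b[f]) >= iou_thresh for f in shared)
    return agree / len(shared) >= agreement


def merge(track_a, track_b):
    """Union of two duplicate tracklets, preferring track_a's box on shared frames."""
    merged = dict(track_b)
    merged.update(track_a)
    return merged
```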