Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492766
Xin Wang, Chunyan Zhang, Guofang Lv, Chen Ning
Saliency detection in infrared images plays a critical role in many practical applications, such as infrared image compression and target detection and tracking. This paper proposes a novel saliency detection method for a single infrared image. First, a local sparse representation based approach is designed to compute an initial saliency map for the input infrared image. Then, to further suppress background information in the initial saliency map, a novel method based on two-dimensional maximum entropy/minimum cross entropy and maximum standard deviation is proposed to predict the foreground. By subtracting the predicted foreground from the original infrared image, the background information is obtained. Finally, the initial saliency map is refined using this background information. The presented method is evaluated on real-life infrared images, and the experimental results show that it achieves better performance than state-of-the-art algorithms.
"A New Hybrid Approach for Saliency Detection in Infrared Images," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
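The foreground prediction above rests on entropy-based thresholding. As an illustration only, a 1-D Kapur maximum-entropy threshold can be sketched in NumPy; this is a simplified stand-in, not the authors' 2-D maximum entropy/minimum cross entropy formulation:

```python
import numpy as np

def max_entropy_threshold(img):
    """Kapur's 1-D maximum-entropy threshold: pick the gray level that
    maximizes the sum of the entropies of the two resulting classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:
            continue  # one class is empty, skip this threshold
        p0 = p[:t + 1] / w0   # class-conditional distributions
        p1 = p[t + 1:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a bimodal intensity histogram the maximizing threshold falls between the two modes, which is what makes the criterion usable for separating a bright foreground from the background.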
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492877
Di Wu, Xue Du, Kaiyu Wang
To remove the complex and severe noise in sonar images more effectively, an image denoising approach based on sparse representation is presented in this paper. Decomposing and then reconstructing the sonar image over a DCT dictionary with OMP is effective for removing additive noise. A logarithmic transformation is then applied to the previously reconstructed image to adapt it to the sparse representation denoising model. Experiments are provided to demonstrate the performance of the proposed approach. Results show that the method efficiently removes both additive and multiplicative noise from sonar images and is particularly appealing in terms of both denoising effect and detail preservation.
"An Effective Approach for Underwater Sonar Image Denoising Based on Sparse Representation," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
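The core step, sparse coding over a DCT dictionary with orthogonal matching pursuit (OMP), can be sketched in plain NumPy. This sketch uses a square orthonormal DCT-II basis on a 1-D signal; the paper presumably works on 2-D patches and possibly an overcomplete dictionary, so treat this as a minimal illustration:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis; column k is the k-th cosine atom."""
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n) * np.sqrt(2.0 / n)
    D[:, 0] = np.sqrt(1.0 / n)  # DC atom gets its own normalization
    return D

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit all picked atoms."""
    residual = y.astype(float).copy()
    idx, coef = [], np.zeros(0)
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

Denoising then amounts to reconstructing each signal (or patch) from only a few atoms, so the unstructured noise, which is not sparse in the DCT basis, is discarded.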
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492898
Zeqi Feng, Pengyu Liu, Ke-bin Jia, Kun Duan
The coding tree unit (CTU) partition technique provides excellent compression performance for HEVC at the expense of increased coding complexity. Therefore, a fast intra coding algorithm based on CTU depth range prediction is proposed herein to reduce the complexity of HEVC intra coding. First, simple CTUs and complex CTUs are defined according to their texture complexity and are limited to different depth ranges. Then, a convolutional neural network architecture for HEVC intra depth range (HIDR-CNN) decision-making is proposed; it is used for CTU classification and depth range restriction. Last, the optimal CTU partition is obtained by recursive rate-distortion (RD) cost calculation within the predicted depth range. Experimental results show that the proposed algorithm achieves an average 27.54% encoding time reduction with negligible RD loss compared with HM 16.9, which helps promote the adoption of HEVC in real-time environments.
"HEVC Fast Intra Coding Based CTU Depth Range Prediction," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
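The depth-range restriction can be illustrated with a toy classifier that substitutes luma variance for the paper's HIDR-CNN. The variance threshold and the depth ranges below are illustrative only, not the authors' values:

```python
import numpy as np

# HEVC quadtree depths for a 64x64 CTU: 0 (64x64 CU) .. 3 (8x8 CU).
def predict_depth_range(ctu_luma, var_thresh=80.0):
    """Toy stand-in for HIDR-CNN: treat a CTU as 'simple' when its luma
    variance is low and restrict the RD search to shallow depths,
    otherwise search only the deeper ones."""
    if float(np.var(ctu_luma)) < var_thresh:
        return range(0, 2)   # simple CTU: depths 0-1 only
    return range(1, 4)       # complex CTU: depths 1-3 only

def best_partition(ctu_luma, rd_cost):
    """RD search restricted to the predicted depth range."""
    return min(predict_depth_range(ctu_luma),
               key=lambda d: rd_cost(ctu_luma, d))
```

Skipping the excluded depths is exactly where the encoding-time saving comes from: the recursive RD cost evaluation is never entered for depths outside the predicted range.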
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492867
K. Guo, Hu Ye, Jianglong Zhou, Baogang Geng, Xiaolong Wu, Yunfei Li
In reverse engineering tasks such as surface reconstruction, to solve the problem of registering laser radar point clouds, a method based on moving least-squares is used to extract the features of target balls; linear equations are then established from the characteristic curves to compute the coordinates of each ball's center. Finally, registration of the point clouds is performed using the coordinates of four ball centers. Experimental results show that the moving least-squares method improves the computing precision of the ball-center coordinates and that the registration error is at the millimeter level. The accuracy is high and satisfies engineering demands.
"Registration of Point Clouds with Feature Extraction Based on Moving Least-Squares," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
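Computing a ball's center from surface points admits a compact linear least-squares formulation. This is a generic sphere fit shown as a sketch; the paper derives the centers from moving least-squares characteristic curves rather than from a direct fit:

```python
import numpy as np

def fit_sphere(pts):
    """Fit a sphere to 3D points by linear least squares.

    Expanding |p - c|^2 = r^2 gives a system that is linear in the
    center c = (cx, cy, cz) and the scalar k = r^2 - |c|^2:
        2*cx*x + 2*cy*y + 2*cz*z + k = x^2 + y^2 + z^2
    """
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

With four such centers recovered in each scan, the rigid transform aligning the two point clouds follows from standard point-set registration.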
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492793
P. Lin, Yongming Chen
This paper proposes an accurate, fast, and reliable strawberry flower detection system for automated strawberry flower yield estimation and harvesting. A state-of-the-art deep object detection framework based on region-based convolutional neural networks (R-CNN) was developed to improve the accuracy of detecting strawberry flowers in outdoor fields. The networks were trained on 400 strawberry flower images and tested on 100. To capture features on multiple scales, three region-based object detection methods, namely R-CNN, Fast R-CNN, and Faster R-CNN, were used to represent the strawberry flower instances. The detection rates of the R-CNN, Fast R-CNN, and Faster R-CNN models were 63.4%, 76.7%, and 86.1%, respectively. Experimental results showed that Faster R-CNN achieves better performance than R-CNN and Fast R-CNN while being less time-consuming. We demonstrated the performance of the Faster R-CNN framework even when strawberry flowers are occluded by foliage, under shadow, or overlapping to some degree. Moreover, automatic yield estimation offers a viable alternative to the current manual counting by workers, which is very time-consuming, expensive, and impractical for large fields.
"Detection of Strawberry Flowers in Outdoor Field by Deep Neural Network," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492832
Anran Wang, X. Hao, Xu Zhang, Ancheng Wang, Peng Hu
Visual positioning methods can be mainly divided into fixed-camera and mobile-camera systems. In this paper, we propose a dynamic target positioning method based on ROI (regions of interest), which uses deep learning to detect targets and a fixed camera system to locate them. The proposed ROI method processes only the target region, which reduces computation time and addresses the problem that few or no feature points of the target are detected in 3D reconstruction. We build a dataset of an experimental car and train YOLOv2 on it to obtain a detection model; the trained model then detects the car in video data acquired by two USB cameras and yields the ROI of the moving target. Using triangulation, only the ROIs of images captured at the same time are reconstructed, and the average of the obtained coordinates is taken as the position of the car at that moment. In the experiments, positions obtained by an OptiTrack system serve as ground truth, against which the positions produced by the proposed ROI method are compared. The experimental results show that the proposed ROI method can locate a dynamic target with centimeter-level positioning accuracy.
"A Dynamic Target Visual Positioning Method Based on ROI," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
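The per-frame 3D position comes from triangulating the matched ROIs across the two cameras. A standard linear (DLT) triangulation of a single point from two calibrated views can be sketched as follows; the camera matrices in the usage below are made up for the demo, not taken from the paper's setup:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    Each observation contributes two rows of the homogeneous system
    A X = 0, solved via SVD (smallest singular vector).
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Averaging the triangulated points inside the ROI, as the abstract describes, then gives one position estimate per synchronized frame pair.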
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492844
Tianli Guo, Yunqi Tang, Wei Guo
The shoeprint is an important piece of trace evidence in forensic science; it can provide information about a suspect's age, height, and sex. To address the problem of individual differences among experts when identifying planar shoeprints, a planar shoeprint image segmentation algorithm based on Multiplicative Intrinsic Component Optimization is proposed in this paper. After segmentation, pseudo-color can be selectively applied to the segmented image, so that the pattern and wear areas of the sole are sketched automatically. Experimental analysis shows that this method can effectively segment the shoeprint, providing criminal investigators with an objective and universal shoeprint identification method for narrowing the scope of investigation.
"Planar Shoeprint Segmentation Based on the Multiplicative Intrinsic Component Optimization," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492737
Long Liu, Danyang Jing, Jie Ding
Panoramic visual tracking is useful for numerous applications. However, the distorted imaging of panoramic vision tends to degrade robustness and lose the target. A panoramic visual tracking method based on adaptive feature fusion is proposed in this paper. The size variation of the target trapezoid box during target movement is labelled, and a linear model describing the parameter variation of the trapezoid box is fitted. The target trapezoid region is extracted by the model and then refined through an affine transformation. Within a particle filtering-based tracking framework, the fusion of color and shape is used as the main feature for target tracking, and particle weights are computed using the Bayesian fusion and recursion formula. Experimental results demonstrate the superiority of the proposed algorithm over other methods in tracking accuracy and anti-occlusion performance, showing that it can considerably improve the robustness of panoramic visual tracking.
"Adaptive Extraction of Fused Feature for Panoramic Visual Tracking," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
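The weight computation can be sketched as multiplying independent Gaussian likelihoods of each particle's color and shape distances and renormalizing. The sigmas and the independence assumption below are illustrative, not taken from the paper:

```python
import numpy as np

def fuse_weights(d_color, d_shape, sigma_c=0.2, sigma_s=0.2):
    """Per-particle weight from color and shape distances, fused as a
    product of Gaussian likelihoods and normalized to sum to 1."""
    w = np.exp(-0.5 * (np.asarray(d_color, dtype=float) / sigma_c) ** 2)
    w *= np.exp(-0.5 * (np.asarray(d_shape, dtype=float) / sigma_s) ** 2)
    return w / w.sum()
```

Particles whose color and shape both match the template receive most of the weight, which is what lets the fused feature survive when one cue degrades, for example under partial occlusion.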
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492735
Salinee Jaidilert, Ghulam Farooque
Thai frescoes are an important part of the world's artistic heritage. However, historical erosion has caused color loss, stains, and scratches in many mural paintings, and how to repair Thai murals has become an urgent problem. Using computer image inpainting technology to restore the missing pixels of the murals and recover intact, attractive paintings is an important scientific problem. In this paper, a computer-aided semi-automatic repair framework is proposed that combines a scratch detection procedure with a model-optimization-based inpainting procedure. To this end, we propose a semi-automatic scratch detection method: a small number of seed points are provided by the user, and the locations of the scratches are then computed by a region-growing method and morphological operations. After that, pixel filling and color restoration in the missing regions are obtained using different variational inpainting methods. Experiments show that the proposed method is effective.
"Crack Detection and Images Inpainting Method for Thai Mural Painting Images," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
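The seed-based scratch detection can be sketched as a plain 4-connected region-growing pass. The intensity tolerance and connectivity choice here are illustrative, and the paper additionally applies morphological operations to clean up the grown region:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from a seed pixel, adding 4-connected neighbours
    whose intensity differs from the seed's by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    ref = float(img[seed])
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

The resulting mask marks the scratch pixels to be filled, which is exactly the missing-region input that the variational inpainting stage expects.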
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492753
Nianhao Xie, Y. Shang
Structural SVM trackers and correlation filter trackers have demonstrated dominant performance in recent object tracking benchmarks. However, structural SVM trackers naturally suffer from a shortage of samples and low speed, and time-consuming adaptation is needed to relieve correlation filter trackers of boundary effects. Thus, we design a joint tracker by concatenating a high-speed SSVM method (DSLT) and a multi-feature CF method (STAPLE) to make their advantages complementary. We show that tracking precision and robustness can be improved by a large margin compared with either single tracker, with little sacrifice of speed.
"An Object Tracking Method by Concatenating Structural SVM and Correlation Filter," 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).