Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492910
Sijia Kong, Jing Peng, Wenxiang Liu, Mengli Wang, Feixue Wang
Global Navigation Satellite Systems (GNSS) include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the Galileo satellite navigation system (Galileo) and the BeiDou Navigation Satellite System (BDS). With the development of BDS, it is necessary to monitor the system time offset between BDS and the other GNSSs to enhance compatibility and interoperability among them. The system time offset between GLONASS and BDS is affected by the inter-channel biases (ICBs) caused by GLONASS's frequency division multiple access (FDMA) technique. To reduce the impact of GLONASS ICBs on the BDS-GLONASS system time offset, this paper proposes a method for estimating GLONASS ICB parameters and system time offset parameters in real time. The experimental results indicate that the standard deviation (STD) of the BDS-GLONASS monitoring value can be reduced from 6-7 ns to about 3 ns (a reduction of more than 45%), and the STDs of the BDS-GPS and BDS-Galileo monitoring values can be reduced by more than 15%. This work also lays the groundwork for further research in GNSS system time offset monitoring and forecasting.
Title: GNSS System Time Offset Real-Time Monitoring with GLONASS ICBs Estimated
Published in: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)
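The joint estimation of a common system time offset plus per-channel ICBs can be illustrated with a toy least-squares setup. This is a minimal sketch, not the paper's actual observation model: the channel count, noise level, and zero-mean ICB constraint below are all assumptions made for illustration.

```python
import numpy as np

# Toy model: each GLONASS frequency channel yields one offset observation
# contaminated by that channel's ICB; a zero-mean constraint on the ICBs
# removes the rank deficiency between the offset and the biases.
rng = np.random.default_rng(0)
n_channels = 14
true_offset = 120.0                      # ns, hypothetical BDS-GLONASS offset
true_icbs = rng.normal(0, 3, n_channels)
true_icbs -= true_icbs.mean()            # enforce zero-mean ICBs

obs = true_offset + true_icbs + rng.normal(0, 0.5, n_channels)

# Design matrix: one offset column + one column per channel ICB,
# plus a constraint row enforcing sum(ICBs) = 0.
A = np.hstack([np.ones((n_channels, 1)), np.eye(n_channels)])
A = np.vstack([A, np.r_[0.0, np.ones(n_channels)]])
y = np.r_[obs, 0.0]

x, *_ = np.linalg.lstsq(A, y, rcond=None)
est_offset, est_icbs = x[0], x[1:]
print(f"estimated offset: {est_offset:.2f} ns")
```

In a real-time monitor the same normal equations would be updated epoch by epoch (e.g. with a Kalman filter), but the separability argument is the same.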
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492821
Jingfeng Lu, Wanyu Liu
Ultrasound imaging is one of the most widely used modalities for clinical diagnosis, but it suffers from low resolution due to intrinsic physical flaws. In this paper, we present a novel unsupervised super-resolution (USSR) framework to solve the single-image super-resolution (SR) problem in ultrasound images, which lack training examples. Our method utilizes the powerful nonlinear mapping ability of convolutional neural networks (CNNs) without relying on prior training or any external data. We exploit the multi-scale contextual information extracted from the test image itself to train an image-specific network at test time. We employ several techniques to improve convergence and accuracy, including dilated convolution and residual learning. To capture valuable internal information, dilated convolution is used to increase the receptive field without increasing the number of network parameters. To speed up training convergence, residual learning is used to directly learn the difference between the high-resolution and low-resolution images. Quantitative and qualitative evaluations on real ultrasound images demonstrate that the proposed method outperforms the state-of-the-art unsupervised method.
Title: Unsupervised Super-Resolution Framework for Medical Ultrasound Images Using Dilated Convolutional Neural Networks
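The paper's network is a 2-D CNN; the dilation idea itself can be shown with a minimal 1-D NumPy sketch (not the authors' implementation): inserting gaps between kernel taps widens the receptive field while the parameter count stays fixed.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D convolution with gaps inserted between kernel taps:
    same number of weights, larger effective receptive field."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective receptive field
    out_len = len(x) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(16, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])           # three weights either way

y1 = dilated_conv1d(x, kernel, dilation=1)   # receptive field 3
y2 = dilated_conv1d(x, kernel, dilation=4)   # receptive field 9, same 3 weights
print(len(y1), len(y2))
```

Residual learning in this setting means the network outputs the high-resolution minus low-resolution difference, so the final estimate is `lr + predicted_residual` rather than the image itself.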
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492787
Yantao Yue, Xiangyi Sun
In this paper, we aim to solve pose estimation of rigid body motion in real time with a 3D line model. Based on the line's perspective projection model, we design a new error function, expressed as the average integral of the distance between line segments, to estimate the parameters. Considering the continuity of motion, we restore broken line segments constrained by the re-projection of model lines. Finally, we propose to estimate many frames jointly in a structure-from-motion (SFM) framework, which achieves better precision at the cost of speed. Comparisons on synthetic and real images demonstrate that the method obtains accurate estimates in complex environments. For planar objects, the precision of pose on the x, y and z axes is better than 0.5 m at a distance of 100 m, and the relative position errors perpendicular to and along the optical axis are better than 0.3%.
Title: Rigid Body Pose Estimation from Line Correspondences
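The "average integral of the distance between line segments" has a closed form when the signed distance varies linearly along a segment. The sketch below is one plausible reading of such an error term, not the paper's exact formulation:

```python
import numpy as np

def avg_segment_to_line_distance(p0, p1, n, d):
    """Mean distance (integral average) from segment p0->p1 to the
    infinite line n.x + d = 0, where n is a unit normal.  The signed
    distance is linear along the segment, so the integral is exact."""
    d0 = np.dot(n, p0) + d
    d1 = np.dot(n, p1) + d
    if d0 * d1 >= 0:                    # segment lies on one side of the line
        return (abs(d0) + abs(d1)) / 2.0
    # sign change: the |distance| graph forms two triangles
    t = d0 / (d0 - d1)                  # parameter where the segment crosses
    return (t * abs(d0) + (1 - t) * abs(d1)) / 2.0

# horizontal line y = 0 (n = (0, 1), d = 0); segment from (0, 1) to (1, 3)
err = avg_segment_to_line_distance(np.array([0.0, 1.0]),
                                   np.array([1.0, 3.0]),
                                   np.array([0.0, 1.0]), 0.0)
print(err)  # mean of a linear distance rising from 1 to 3 -> 2.0
```

In a pose estimator, such a term would be summed over all model-line/image-segment correspondences and minimized over the six pose parameters.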
In some high-security applications where multiple participants must input their own secret data to achieve access control, such as a secure cabinet opened by several owners together, traditional security technology is not applicable. Although secret sharing may be used in these scenarios, problems arise when directly applying primary secret sharing methods, including visual cryptography (VC) and polynomial-based secret sharing. In this paper, we first describe the application scenario (namely secret data fusion) and its requirements, noting that secret data fusion differs from secret sharing. Then, we propose a possible method for secret data fusion based on the Chinese remainder theorem (CRT). Theoretical analyses and experiments demonstrate the effectiveness of our method.
Title: Secret Data Fusion Based on Chinese Remainder Theorem
Authors: Yuliang Lu, Xuehu Yan, Lintao Liu, Jingju Liu, Guozheng Yang, Qiang Li
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492875
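The CRT machinery behind such schemes is easy to demonstrate. This is a generic CRT reconstruction sketch with invented parameters, not the paper's fusion protocol: each participant holds the secret's residue modulo one of a set of pairwise-coprime moduli, and all residues together determine the secret.

```python
from math import prod

def crt_recover(residues, moduli):
    """Chinese remainder theorem: recover x mod prod(moduli) from its
    residues modulo each (pairwise coprime) modulus."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return x % M

# Hypothetical 3-participant example: pairwise coprime moduli, product 2431
moduli = [11, 13, 17]
secret = 1234                          # must be smaller than the product
shares = [secret % m for m in moduli]  # one residue per participant

recovered = crt_recover(shares, moduli)
print(recovered)  # 1234
```

With fewer than all residues the candidate set is a full residue class modulo the product of the known moduli, which is what gives threshold variants (e.g. Asmuth-Bloom) their security margin.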
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492732
Ying Zhang, Qianqian Hu, Zhen Guo, Jian Xu, Kun Xiong
With the development of computer technology, the diagnostic capability of computer-aided diagnosis systems has improved, making it possible to classify brain images into healthy or pathological categories automatically and accurately. In this paper, we propose an improved method that introduces the reality-preserving fractional Fourier transform (RPFRFT) and Adaboost to classify brain images into five categories: healthy, cerebrovascular disease, neoplastic disease, degenerative disease and inflammatory disease. We used 190 T2-weighted images obtained by magnetic resonance imaging in the experiment. First, we employ RPFRFT to extract spectrum features from each magnetic resonance image. Second, we apply principal component analysis (PCA) to reduce the feature dimensionality to 86. Third, the reduced spectral features of the samples are combined and fed into Adaboost to train the classifier. The 10×10-fold cross validation obtained an accuracy of 98.6%.
Title: Multi-Class Brain Images Classification Based on Reality-Preserving Fractional Fourier Transform and Adaboost
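The PCA reduction step of the pipeline can be sketched in a few lines of NumPy. The feature dimensionality (256) below is an invented stand-in for the RPFRFT spectrum features; only the 190-sample count and the target dimension 86 come from the abstract.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center each feature
    # SVD of the centered data; rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
# stand-in for the spectral features: 190 samples, 256 raw dimensions
X = rng.normal(size=(190, 256))
Z = pca_reduce(X, 86)
print(Z.shape)  # (190, 86)
```

In practice the reduced features `Z` would be handed to a boosted classifier (e.g. scikit-learn's `AdaBoostClassifier`) exactly as the abstract's third step describes.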
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492728
Xuebo Zhang, Cheng Tan, Wenwei Ying
Parameter initialization plays an important role in iterative parameter estimation. Based on the characteristic function, this paper presents a parameter estimation method for Class B noise that accounts for parameter initialization. The noise is first modeled as a symmetric alpha-stable (SαS) distribution. Using the log method, we obtain estimated parameters, which are then used as initial values for the iteration, improving the convergence speed. Processing results on simulated data indicate that the parameters of Class B noise can be efficiently estimated with the presented method.
Title: Characteristic Function Based Parameter Estimation for Ocean Ambient Noise
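One common form of the characteristic-function log method (a sketch, not necessarily the paper's exact variant) exploits that the SαS characteristic function is exp(-|γt|^α), so log(-log|φ(t)|) is linear in log t with slope α. Gaussian data, the α = 2 member of the family, serves as a convenient check:

```python
import numpy as np

def ecf(samples, t):
    """Empirical characteristic function evaluated at frequency t."""
    return np.mean(np.exp(1j * t * samples))

def estimate_alpha(samples, t1=0.5, t2=1.0):
    """For SalphaS noise, |phi(t)| = exp(-|gamma*t|**alpha), so
    log(-log|phi(t)|) = alpha*log(t) + alpha*log(gamma): a line
    whose slope through two frequencies gives alpha."""
    y1 = np.log(-np.log(np.abs(ecf(samples, t1))))
    y2 = np.log(-np.log(np.abs(ecf(samples, t2))))
    return (y2 - y1) / (np.log(t2) - np.log(t1))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 200_000)   # Gaussian = SalphaS with alpha = 2
alpha_hat = estimate_alpha(x)
print(round(alpha_hat, 2))
```

The resulting α (and the matching γ from the intercept) are exactly the kind of cheap closed-form values one would feed into an iterative Class B estimator as initial guesses.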
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492895
Huiqian Niu, Qiankun Lu, Chao Wang
Image stitching is a widely used technique for obtaining panoramas in daily life. There are often color differences between neighboring views due to differing exposure levels and viewing angles. Although many automatic color correction approaches have been proposed, they are not appropriate for all multi-view image and video stitching, especially when occlusion or parallax exists. This paper puts forward a new method based on histogram matching and polynomial regression. The experimental results show that the method handles the color differences well whether or not parallax exists.
Title: Color Correction Based on Histogram Matching and Polynomial Regression for Image Stitching
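The histogram-matching half of the method is standard and easy to sketch: map each source intensity through the source CDF and the inverted reference CDF. The synthetic "dark" and "bright" views below are invented test data, and the paper additionally fits a polynomial regression on top of this step.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source values so their empirical CDF matches the reference's."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(3)
dark = rng.integers(0, 128, (64, 64)).astype(float)     # underexposed view
bright = rng.integers(64, 256, (64, 64)).astype(float)  # reference view
corrected = match_histogram(dark, bright)
print(corrected.mean() > dark.mean())  # pushed toward the reference
```

For color images the same mapping is applied per channel; the polynomial fit then smooths the lookup so occluded or parallax-shifted pixels do not distort it.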
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492874
Feng Huahui, Zhang Geng, Zhang Xin, Hu Bingliang
Focusing on the stereo matching problem for low-SNR images, such as images collected at night, we propose a novel matching framework based on the semi-global matching algorithm and AD-Census. This algorithm extends the original algorithms in two ways. First, image segmentation information is added as an additional constraint, which solves the problem of incomplete paths and improves the accuracy of the cost calculation. Second, the matching cost volume is calculated with the AD-SoftCensus measure, which minimizes the impact of noise on matching quality by changing the census descriptor from binary to ternary. Results on the Middlebury standard test data show that the algorithm significantly improves matching precision. In addition, a low-light binocular platform is built to test our method in a night environment. Results show the disparity maps are more accurate than those of previous methods.
Title: A Noise-Resistant Stereo Matching Algorithm Integrating Regional Information
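The binary-to-ternary change can be sketched directly: instead of coding each neighbor as above/below the patch center, a tolerance band around the center value becomes a third "about equal" state, so small noise around the center no longer flips code digits. This is an illustrative reading of the abstract's ternary census, not the authors' exact AD-SoftCensus definition (and `eps` is an invented parameter):

```python
import numpy as np

def census_digits(patch, center, eps=0.0):
    """Per-pixel comparison digits against the patch center value.
    With eps = 0 this is a two-sided sign comparison; a positive eps
    yields a ternary code (0 = below, 1 = about equal, 2 = above) that
    is less sensitive to noise near the center value."""
    diff = patch - center
    digits = np.ones_like(patch, dtype=int)   # 1 = "about equal"
    digits[diff > eps] = 2
    digits[diff < -eps] = 0
    return digits

patch = np.array([[10.0, 12.0,  9.0],
                  [11.0, 10.0, 30.0],
                  [10.5, 10.0,  5.0]])
binary = census_digits(patch, center=10.0, eps=0.0)
ternary = census_digits(patch, center=10.0, eps=1.0)
print(ternary)
```

Note how the 9.0 and 11.0 pixels, which flip the eps = 0 code under 1-unit noise, land in the stable middle state of the ternary code; the matching cost is then a Hamming-style distance between such codes, blended with the absolute color difference (the "AD" term).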
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492751
Xiaolin Zhao, Shilin Zhou, Lin Lei, Zhipeng Deng
In Unmanned Aerial Vehicle (UAV) videos, object tracking remains a challenge due to low spatial resolution and demanding real-time requirements. Recently, deep learning methods have made great progress in object tracking in computer vision, especially fully-convolutional Siamese neural networks (SiamFC). Inspired by this, this paper investigates the use of SiamFC for object tracking in UAV videos. The network is trained on part of the UAV123 dataset and the Stanford Drone dataset. First, an exemplar image is extracted from the first frame, and search regions are extracted from the following frames. Then, a Siamese network tracks objects by calculating the similarity between the exemplar image and the search region. To evaluate our method, we test on the challenging VIVID dataset. The experiments show that the proposed method improves accuracy and speed on low-spatial-resolution UAV videos compared to existing methods.
Title: Siamese Network for Object Tracking in Aerial Video
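SiamFC's core operation is a cross-correlation of the exemplar embedding over the search-region embedding, with the peak of the score map giving the new object location. The sketch below performs that correlation on raw pixels with synthetic data; in SiamFC proper both inputs first pass through a shared learned CNN.

```python
import numpy as np

def score_map(exemplar, search):
    """Slide the exemplar over the search region, accumulating a
    cross-correlation score at every offset; the peak of the map is
    the predicted object location."""
    eh, ew = exemplar.shape
    sh, sw = search.shape
    out = np.empty((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(exemplar * search[i:i+eh, j:j+ew])
    return out

rng = np.random.default_rng(4)
exemplar = rng.normal(size=(8, 8))
search = rng.normal(scale=0.1, size=(32, 32))
search[12:20, 5:13] += exemplar            # hide the object at offset (12, 5)

scores = score_map(exemplar, search)
peak = np.unravel_index(scores.argmax(), scores.shape)
print(peak)  # expected peak at (12, 5)
```

Because the correlation is computed densely in one pass, the same idea runs at high frame rates, which is what makes the architecture attractive for UAV video.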
Pub Date: 2018-06-01, DOI: 10.1109/ICIVC.2018.8492826
Xunping Huang, Ridong Zhang, Ke-bin Jia, Zuyun Wang, Wenzhen Nie
The Adaboost vehicle detection algorithm based on Haar features performs well in both real-time operation and accuracy. However, the method produces many missed and false detections when detecting special vehicles in complicated traffic flow. In this paper, a method of detecting the taxi window area is proposed to replace whole-vehicle detection. At the same time, a sliding color histogram matching method is proposed to reduce false detections. Finally, traffic surveillance video is used to verify the algorithm; the detection results show that the algorithm achieves good accuracy and real-time performance for taxi detection.
Title: Taxi Detection Based on the Sliding Color Histogram Matching
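The "sliding color histogram matching" idea can be sketched in 1-D: slide a window over the data, compare each window's color histogram to a taxi color template via histogram intersection, and keep the best-scoring position. The template, image row, bin count, and stride below are all invented for illustration; the paper works on 2-D window regions of surveillance frames.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return np.minimum(h1, h2).sum()

def color_hist(strip, bins=8):
    h, _ = np.histogram(strip, bins=bins, range=(0, 256))
    return h / h.sum()

# Template: hypothetical taxi color band (bright values around 220)
template = np.full(200, 220.0) + np.random.default_rng(5).normal(0, 5, 200)
t_hist = color_hist(template)

# 1-D stand-in for an image row: dark background, taxi-like strip inside
row = np.full(1000, 40.0)
row[600:800] = template

win, stride = 200, 20
scores = [hist_intersection(color_hist(row[i:i+win]), t_hist)
          for i in range(0, len(row) - win + 1, stride)]
best = int(np.argmax(scores)) * stride
print(best)  # strongest match where the bright strip starts (600)
```

Because the histogram ignores pixel ordering, the match tolerates small pose changes inside the window, which is why it works as a cheap verification step after the Haar/Adaboost window proposal.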