Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492910
Sijia Kong, Jing Peng, Wenxiang Liu, Mengli Wang, Feixue Wang
Global Navigation Satellite Systems (GNSS) include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the Galileo satellite navigation system (Galileo) and the BeiDou Navigation Satellite System (BDS). With the development of BDS, it is necessary to monitor the system time offset between BDS and the other GNSSs to enhance compatibility and interoperability among them. The system time offset between GLONASS and BDS is affected by the inter-channel biases (ICBs) caused by GLONASS's frequency division multiple access (FDMA) technique. To reduce the impact of GLONASS ICBs on the BDS-GLONASS system time offset, this paper proposes a method for estimating the GLONASS ICB parameters and the system time offset parameters in real time. The experimental results indicate that the standard deviation (STD) of the BDS-GLONASS monitoring value can be reduced from 6–7 ns to about 3 ns (a reduction of more than 45%), and the STDs of the BDS-GPS and BDS-Galileo monitoring values can be reduced by more than 15%. This work will also support further research on GNSS system time offset monitoring and forecasting.
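The joint estimation of ICB and offset parameters can be illustrated with a toy least-squares sketch. All values below are hypothetical, and the zero-mean constraint on the biases is one common way to remove the rank defect, not necessarily the paper's choice:

```python
import numpy as np

# Toy sketch (not the paper's algorithm): jointly estimate a common
# system time offset and per-channel biases (ICBs) by least squares,
# with a zero-mean constraint on the biases to remove the rank defect.
rng = np.random.default_rng(0)
n_channels, n_epochs = 5, 200
true_offset = 120.0                                  # ns, hypothetical offset
true_icb = np.array([3.0, -1.0, 0.5, -2.0, -0.5])    # ns, sums to zero

# Each observation = offset + channel bias + noise
obs = true_offset + true_icb[:, None] + rng.normal(0, 1.0, (n_channels, n_epochs))

# Design matrix: one offset column + (n_channels - 1) free bias columns;
# the last bias is fixed as minus the sum of the others (zero-mean constraint).
y = obs.ravel()
A = np.zeros((y.size, n_channels))
A[:, 0] = 1.0
for ch in range(n_channels):
    rows = slice(ch * n_epochs, (ch + 1) * n_epochs)
    if ch < n_channels - 1:
        A[rows, 1 + ch] = 1.0
    else:
        A[rows, 1:] = -1.0                           # b5 = -(b1 + b2 + b3 + b4)

x, *_ = np.linalg.lstsq(A, y, rcond=None)
est_offset = x[0]
print(round(est_offset, 1))
```

Separating the common offset from the per-channel terms in this way is what keeps the FDMA biases from leaking into the monitored offset.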
Title: "GNSS System Time Offset Real-Time Monitoring with GLONASS ICBs Estimated". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492821
Jingfeng Lu, Wanyu Liu
Ultrasound imaging is one of the most widely used imaging modalities for clinical diagnosis, but it suffers from low resolution due to intrinsic physical limitations. In this paper, we present a novel unsupervised super-resolution (USSR) framework to solve the single-image super-resolution (SR) problem for ultrasound images, which lack training examples. Our method exploits the powerful nonlinear mapping ability of convolutional neural networks (CNNs) without relying on prior training or any external data: the multi-scale contextual information extracted from the test image itself is used to train an image-specific network at test time. Several techniques improve convergence and accuracy, including dilated convolution and residual learning. To capture valuable internal information, dilated convolution is employed to enlarge the receptive field without increasing the number of network parameters. To speed up training convergence, residual learning is used to directly learn the difference between the high-resolution and low-resolution images. Quantitative and qualitative evaluations on real ultrasound images demonstrate that the proposed method outperforms the state-of-the-art unsupervised method.
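The receptive-field benefit of dilation that the abstract describes can be checked with a short calculation. The dilation rates below are illustrative, not the actual configuration of the USSR network:

```python
# Sketch: receptive field of a stride-1 stack of 3x3 dilated convolutions,
# illustrating why dilation enlarges context without adding parameters.
def receptive_field(dilations, kernel=3):
    """Receptive field of a stack of stride-1 convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d   # each layer adds (k-1)*dilation pixels
    return rf

# Plain 3x3 stack vs. a dilated stack with the same parameter count.
print(receptive_field([1, 1, 1]))   # 7
print(receptive_field([1, 2, 4]))   # 15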
Title: "Unsupervised Super-Resolution Framework for Medical Ultrasound Images Using Dilated Convolutional Neural Networks". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492797
Hu Xiaomei, Li Minghang, Wang Chuan, Yang Xu, Wei Chenjun
To establish a simulation model of dust evolution and reveal the evolution process of dust, Shanghai University is selected as the simulation area. A kinetic Monte Carlo method is used to simulate the dust particles in the virtual campus, and OpenGL with the C language is used to visualize the simulation model. Collected data and simulation data are compared at different locations on the campus, and the results confirm the validity of the dust evolution simulation model. Based on the visualization results, the relationships among wind speed, simulation time, vegetation, and either the accumulation of dust particles on the ground or their motion in the vertical plane are revealed. Visualization of the dust evolution simulation model provides a valid reference for dust control.
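A minimal kinetic Monte Carlo step, the core of the simulation method named above, might look like the sketch below. The event types and rates are hypothetical and do not come from the paper's dust model:

```python
import math
import random

# Minimal kinetic Monte Carlo step (illustrative, not the paper's full
# dust model): pick an event with probability proportional to its rate,
# then advance the clock by an exponentially distributed increment.
def kmc_step(rates, rng):
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, 0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # time to next event
    return chosen, dt

rng = random.Random(42)
rates = [0.5, 1.5, 3.0]   # hypothetical deposition/resuspension/transport rates
counts = [0, 0, 0]
for _ in range(10000):
    i, _ = kmc_step(rates, rng)
    counts[i] += 1
print(counts)   # event counts roughly proportional to the rates
```

Over many steps the selection frequencies converge to the rate ratios, which is what makes KMC a faithful way to evolve particle systems in time.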
Title: "Visualization of Dust Evolution Simulation Model in Campus Environment". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492755
Pei An, Yanchao Liu, Wei Zhang, Z. Jin
With the development of lunar exploration technology, vision-based localization and navigation has become a research focus for lunar rovers. This paper proposes an image-based method for localization and mapping with a lunar rover, where the motion of the camera represents the movement of the rover. From the images acquired by the camera, the relative camera poses and 3D landmarks are obtained using multi-view geometry and bundle adjustment; no prior knowledge of the rover's movement is required. In addition, this paper proposes a grid-based feature extraction method to solve the problems of uneven feature distribution and mismatching. The algorithm has been tested in real time on a large image dataset. Finally, error analysis of the estimated poses against the real trajectory demonstrates the strong performance of the algorithm.
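The grid-based feature extraction idea can be sketched as follows. The cell layout and the synthetic corner-response map are assumptions, since the abstract does not give the detector details:

```python
import numpy as np

# Sketch of the grid-based idea: split the image into cells and keep the
# strongest corner response per cell, so features spread evenly instead
# of clustering in high-texture regions.
def grid_features(response, grid=(4, 4)):
    h, w = response.shape
    gh, gw = h // grid[0], w // grid[1]
    keypoints = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = response[gy * gh:(gy + 1) * gh, gx * gw:(gx + 1) * gw]
            if cell.max() > 0:                           # skip featureless cells
                dy, dx = np.unravel_index(cell.argmax(), cell.shape)
                keypoints.append((gy * gh + dy, gx * gw + dx))
    return keypoints

rng = np.random.default_rng(1)
resp = rng.random((64, 64))      # stand-in for a corner-response map
kps = grid_features(resp)
print(len(kps))                  # one keypoint per non-empty cell
```

Spreading features across cells this way gives bundle adjustment well-conditioned geometry even when texture is concentrated in one part of the frame.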
Title: "Vision-Based Simultaneous Localization and Mapping on Lunar Rover". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492884
Mandan Zhao, X. Hao, Gaochang Wu
This paper addresses accurate disparity-map estimation in the cross-scale reference-based light field, which consists of several low-quality images arranged around one central high-resolution (HR) image. In our framework, we use an HR image-guided CNN (HRIG-CNN) to estimate the disparity map at the HR level. Specifically, we first calculate a coarse disparity map using our cross-pattern strategy, which blends multiple disparity maps. We then refine this coarse map with HRIG-CNN to obtain a high-quality disparity map that preserves detail and edge information. With HR image guidance, HRIG-CNN achieves state-of-the-art disparity estimation in this hybrid light-field setting. Finally, we provide quantitative and qualitative comparisons with different methods, demonstrating the high performance and robustness of the proposed framework relative to state-of-the-art algorithms.
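One plausible way to blend several disparity estimates into a coarse map, as the cross-pattern stage does, is a per-pixel median, which rejects outlier estimates. This is an assumption for illustration, not the paper's exact strategy:

```python
import numpy as np

# Sketch: merge several noisy disparity estimates with a per-pixel
# median; a single badly mismatched view is rejected rather than averaged in.
rng = np.random.default_rng(0)
true_disp = np.full((16, 16), 4.0)                     # ground-truth disparity
estimates = [true_disp + rng.normal(0, 0.1, true_disp.shape) for _ in range(4)]
estimates.append(true_disp + 10.0)                     # one badly mismatched view

coarse = np.median(np.stack(estimates), axis=0)
print(float(np.abs(coarse - true_disp).max()) < 0.5)   # outlier view rejected
```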
Title: "The Accurate Estimation of Disparity Maps from Cross-Scale Reference-Based Light Field". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492725
Abubakar Siddique, Bin Xiao, Weisheng Li, Qamar Nawaz, Isma Hamid
In this work, a multi-focus image fusion method is proposed using color principal component analysis (C-PCA). The method consists of several phases. First, both source images are separated into their three RGB color channels. Next, for each channel, the covariance between the two images is computed, and weights derived from it are used to generate intermediate images. Convolution with a Gaussian kernel is then applied to smooth the images, and a zero-crossing-based second-order derivative is used to extract edges. In the last phase, the images are decomposed into blocks, and salient-feature information from the Laplacian of Gaussian and the spatial frequency of each block is used to produce the fused image. Experimental results on standard quality metrics show that the proposed method performs well compared to existing methods.
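The per-channel PCA weighting step can be sketched for a single channel with toy data. The paper applies this per RGB channel and per block; the details below are a generic PCA-fusion sketch, not the authors' exact procedure:

```python
import numpy as np

# Sketch of PCA-based fusion weights for one channel: the leading
# eigenvector of the 2x2 covariance between the two source images
# gives the mixing weights, favoring the higher-variance (in-focus) source.
def pca_weights(img_a, img_b):
    data = np.vstack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                       # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])                  # leading principal component
    return v / v.sum()

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))                           # in-focus region
blurred = 0.2 * sharp + 0.05 * rng.random((32, 32))    # low-variance version
wa, wb = pca_weights(sharp, blurred)
fused = wa * sharp + wb * blurred
print(wa > wb)   # the in-focus image receives the larger weight
```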
Title: "Multi-Focus Image Fusion Using Block-Wise Color-Principal Component Analysis". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492787
Yantao Yue, Xiangyi Sun
In this paper, we aim to solve real-time pose estimation of rigid-body motion from a 3D line model. Based on the lines' perspective projection model, we design a new error function, expressed as the average integral of the distance between line segments, to estimate the parameters. Exploiting the continuity of motion, we restore broken line segments under the constraint of re-projected model lines. Finally, we propose estimating many frames jointly in a structure-from-motion (SFM) framework, which achieves better precision at the cost of slower speed. Comparisons with baseline methods on synthetic and real images demonstrate accurate estimation in complex environments. For planar objects, the pose precision on the x, y and z axes is better than 0.5 m at a distance of 100 m, and the precision of relative positions perpendicular to and along the optical axis is better than 0.3%.
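An "average integral of the distance between line segments" admits a closed form when one side is a straight image line. The sketch below is an assumption about the formulation (it measures a projected segment against a normalized image line), offered to make the error term concrete:

```python
# Sketch of a segment-to-line error term (an assumption, not the paper's
# exact formulation): the mean distance from a projected model segment
# to a detected image line, in closed form.
def segment_to_line_error(p0, p1, line):
    """line = (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1."""
    a, b, c = line
    d0 = a * p0[0] + b * p0[1] + c       # signed distance of first endpoint
    d1 = a * p1[0] + b * p1[1] + c       # signed distance of second endpoint
    if d0 * d1 >= 0:                     # same side: average of a trapezoid
        return (abs(d0) + abs(d1)) / 2
    # segment crosses the line: area of two triangles over unit length
    return (d0 * d0 + d1 * d1) / (2 * (abs(d0) + abs(d1)))

# Horizontal line y = 0; segment from (0, 1) to (1, 3)
print(segment_to_line_error((0, 1), (1, 3), (0, 1, 0)))   # 2.0
```

Because the signed distance varies linearly along the segment, the integral reduces to these two cases, so the error is cheap enough for a real-time inner loop.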
Title: "Rigid Body Pose Estimation from Line Correspondences". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492826
Xunping Huang, Ridong Zhang, Ke-bin Jia, Zuyun Wang, Wenzhen Nie
The AdaBoost vehicle detection algorithm based on Haar features performs well in both real-time operation and accuracy. However, it produces many missed and false detections for special vehicles in complicated traffic flow. In this paper, a method that detects the taxi window area is proposed to replace whole-vehicle detection. In addition, a sliding color histogram matching method is proposed to reduce false detections. Finally, the algorithm is verified on traffic surveillance video; the detection results show good accuracy and real-time performance for taxi detection.
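Histogram comparison by normalized correlation is one standard way to implement the matching step; the sketch below uses that measure with a hypothetical taxi-color hue template (the abstract does not specify the color space or comparison metric):

```python
import numpy as np

# Sketch (assumed detail, not the paper's exact matcher): score a
# candidate window's hue histogram against a taxi-color template with
# normalized correlation; sliding the window and keeping the best score
# filters out non-taxi detections.
def hist_correlation(h1, h2):
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    denom = np.sqrt((h1 * h1).sum() * (h2 * h2).sum())
    return float((h1 * h2).sum() / denom) if denom > 0 else 0.0

template = np.array([0., 1., 8., 2., 0., 0.])       # hypothetical taxi-yellow peak
candidate_taxi = np.array([0., 2., 7., 3., 1., 0.])
candidate_other = np.array([5., 1., 0., 0., 1., 6.])
print(hist_correlation(template, candidate_taxi) >
      hist_correlation(template, candidate_other))   # True
```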
Title: "Taxi Detection Based on the Sliding Color Histogram Matching". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492732
Ying Zhang, Qianqian Hu, Zhen Guo, Jian Xu, Kun Xiong
With the development of computer technology, the diagnostic capability of computer-aided diagnosis systems has improved, helping to classify brain images into healthy or pathological categories automatically and accurately. In this paper, we propose an improved method that combines the reality-preserving fractional Fourier transform (RPFRFT) and AdaBoost to classify brain images into five categories: healthy, cerebrovascular disease, neoplastic disease, degenerative disease and inflammatory disease. We used 190 T2-weighted images obtained by magnetic resonance imaging in the experiment. First, we employed the RPFRFT to extract spectral features from each magnetic resonance image. Second, we applied principal component analysis (PCA) to reduce the feature dimensionality to 86. Third, the reduced spectral features of the different samples were combined and fed into AdaBoost to train the classifier. A 10×10-fold cross validation obtained an accuracy of 98.6%, confirming the effectiveness of the proposed method.
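The PCA stage (step two of the pipeline) can be reproduced with a plain SVD. The feature dimensionality below is an illustrative toy value; only the 190-sample count and the 86 retained components come from the abstract:

```python
import numpy as np

# Sketch of the PCA dimensionality-reduction step: center the feature
# matrix and project it onto the top principal components obtained via SVD.
rng = np.random.default_rng(0)
features = rng.normal(size=(190, 512))   # 190 samples, hypothetical 512-dim spectra

mean = features.mean(axis=0)
centered = features - mean
# SVD of the centered data: rows of Vt are the principal axes,
# ordered by decreasing singular value.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 86
reduced = centered @ Vt[:k].T            # (190, 86) inputs for the AdaBoost stage
print(reduced.shape)
```

At classification time, a new image's features would be centered with the same `mean` and projected with the same `Vt[:k]`, so the classifier always sees the training-time coordinate system.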
Title: "Multi-Class Brain Images Classification Based on Reality-Preserving Fractional Fourier Transform and Adaboost". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492895
Huiqian Niu, Qiankun Lu, Chao Wang
Image stitching is a widely used technique for obtaining panoramas in daily life. Color differences often arise between neighboring views because of different exposure levels and viewing angles. Although many automatic color correction approaches have been proposed, they are not appropriate for all multi-view image and video stitching, especially when occlusion or parallax exists. This paper puts forward a new method based on histogram matching and polynomial regression. The experimental results show that the method effectively reduces color differences whether or not parallax exists.
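The polynomial-regression stage can be sketched with NumPy's polynomial fitting. The degree and the synthetic overlap data are assumptions; the paper does not state its polynomial order:

```python
import numpy as np

# Sketch of the regression step: fit a per-channel polynomial mapping
# one view's overlap-region intensities onto the other's, then apply
# the mapping to the whole view to equalize color.
rng = np.random.default_rng(0)
src = rng.uniform(0, 255, 5000)   # overlap-region pixel values, view A
# View B seen as a gain/offset change of view A plus sensor noise.
dst = np.clip(0.8 * src + 20 + rng.normal(0, 2, src.size), 0, 255)

coeffs = np.polyfit(src, dst, deg=2)   # least-squares polynomial fit
corrected = np.polyval(coeffs, src)    # view A remapped toward view B
rmse = np.sqrt(np.mean((corrected - dst) ** 2))
print(rmse < 3.0)   # residual is close to the simulated noise level
```

Fitting on the overlap but applying to the whole view is what lets the correction survive parallax: the mapping depends only on intensity, not on pixel correspondence.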
Title: "Color Correction Based on Histogram Matching and Polynomial Regression for Image Stitching". In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).