
Latest publications: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)

GNSS System Time Offset Real-Time Monitoring with GLONASS ICBs Estimated
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492910
Sijia Kong, Jing Peng, Wenxiang Liu, Mengli Wang, Feixue Wang
Global Navigation Satellite Systems (GNSS) include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the Galileo satellite navigation system (Galileo) and the BeiDou Navigation Satellite System (BDS). With the development of BDS, it is necessary to monitor the system time offset between BDS and the other GNSSs to enhance compatibility and interoperability among them. The system time offset between GLONASS and BDS is affected by the inter-channel biases (ICBs) caused by the frequency-division multiple access (FDMA) technique. To reduce the impact of GLONASS ICBs on the BDS and GLONASS system time offset (BDS-GLONASS), this paper proposes a method of estimating GLONASS ICB parameters and system time offset parameters in real time. The experimental results indicate that the standard deviation (STD) of the BDS-GLONASS monitoring value can be reduced from 6-7 ns to about 3 ns (a reduction of more than 45%), and the STDs of the BDS-GPS and BDS-Galileo monitoring values can be reduced by more than 15%. This work will also lead to further research in GNSS system time offset monitoring and forecasting.
Citations: 2
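The joint estimation the abstract describes (per-channel GLONASS biases together with the per-epoch system time offset) can be sketched as an ordinary least-squares problem. This is a minimal illustration, not the authors' estimator: each observation is modeled as the epoch's offset plus a channel bias, and a zero-mean constraint on the biases removes the rank deficiency between the two parameter sets.

```python
import numpy as np

def estimate_offset_and_icbs(epochs, channels, obs, n_channels):
    """Jointly estimate a per-epoch time offset and per-channel biases
    (ICBs) by linear least squares, under the model
        obs[k] = offset[epochs[k]] + icb[channels[k]] + noise
    with the constraint sum(icb) = 0 appended as an extra row."""
    n_epochs = int(epochs.max()) + 1
    n_obs = len(obs)
    A = np.zeros((n_obs + 1, n_epochs + n_channels))
    A[np.arange(n_obs), epochs] = 1.0                # offset columns
    A[np.arange(n_obs), n_epochs + channels] = 1.0   # ICB columns
    A[n_obs, n_epochs:] = 1.0                        # constraint: sum(icb) = 0
    y = np.concatenate([obs, [0.0]])
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x[:n_epochs], x[n_epochs:]

# noise-free synthetic check: 3 epochs observed on 4 FDMA channels
true_offset = np.array([10.0, 10.5, 11.0])        # ns, per epoch
true_icb = np.array([0.8, -0.3, -0.6, 0.1])       # ns, sums to zero
epochs = np.repeat(np.arange(3), 4)
channels = np.tile(np.arange(4), 3)
obs = true_offset[epochs] + true_icb[channels]
offset_est, icb_est = estimate_offset_and_icbs(epochs, channels, obs, 4)
```

With noise-free data the estimates recover the true parameters exactly; with real observations the same structure would be wrapped in an epoch-by-epoch filter.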
Unsupervised Super-Resolution Framework for Medical Ultrasound Images Using Dilated Convolutional Neural Networks
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492821
Jingfeng Lu, Wanyu Liu
Ultrasound imaging is one of the most widely used imaging modalities for clinical diagnosis, but suffers from low resolution due to intrinsic physical limitations. In this paper, we present a novel unsupervised super-resolution (USSR) framework to solve the single-image super-resolution (SR) problem for ultrasound images, which lack training examples. Our method exploits the powerful nonlinear mapping ability of convolutional neural networks (CNNs) without relying on prior training or any external data: the multi-scale contextual information extracted from the test image itself is used to train an image-specific network at test time. We employ several techniques to improve convergence and accuracy, including dilated convolution and residual learning. To capture valuable internal information, dilated convolution is used to increase the receptive field without increasing the number of network parameters. To speed up training convergence, residual learning is used to directly learn the difference between the high-resolution and low-resolution images. Quantitative and qualitative evaluations on real ultrasound images demonstrate that the proposed method outperforms the state-of-the-art unsupervised method.
Citations: 20
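The dilated-convolution idea the abstract relies on can be sketched in a few lines of numpy: spacing the kernel taps by a dilation factor enlarges the receptive field without adding any weights (residual learning then only has to model the HR-LR difference). This illustrates the operator in 1D only, not the USSR network itself.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1D convolution with a dilated kernel: the taps are spaced
    `dilation` samples apart, so the receptive field grows to
    (len(w) - 1) * dilation + 1 while the weight count stays len(w)."""
    k = len(w)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = np.dot(x[i : i + span : dilation], w)
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])   # the same 3 weights in both calls
y1 = dilated_conv1d(x, w, 1)    # receptive field 3
y2 = dilated_conv1d(x, w, 2)    # receptive field 5, no extra parameters
```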
Rigid Body Pose Estimation from Line Correspondences
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492787
Yantao Yue, Xiangyi Sun
In this paper, we aim to estimate the pose of a moving rigid body in real time using a 3D line model. Based on the line's perspective projection model, we design a new error function, expressed as the average integral of the distance between line segments, to estimate the parameters. Exploiting the continuity of motion, we restore broken line segments by re-projecting the constrained model lines. Finally, we propose estimating multiple frames jointly in a structure-from-motion (SFM) framework, which achieves better precision at the cost of slower speed. Comparisons on synthetic and real images demonstrate that the proposed method achieves accurate estimates in complex environments. For planar objects, the precision of the pose along the x, y and z axes is better than 0.5 m at a distance of 100 m, and that of the relative positions perpendicular to and along the optical axis is better than 0.3%.
Citations: 0
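The error term named in the abstract ("average integral of the distance between line segments") can be sketched numerically: sample points along one segment and average their distances to the supporting line of the other. This is an assumed minimal reading of the error function, not the paper's exact formulation.

```python
import numpy as np

def avg_segment_line_distance(a0, a1, b0, b1, n=64):
    """Average distance from points sampled along segment A (a0 -> a1)
    to the supporting line of segment B (b0 -> b1), in 2D image
    coordinates; a numeric stand-in for the average-integral error."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1 - t) * a0 + t * a1                           # samples along A
    d = b1 - b0
    n_hat = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal of B
    return np.mean(np.abs((pts - b0) @ n_hat))

# segment A rises linearly away from the x-axis (segment B):
# distance grows from 0 to 1, so the average is 0.5
err = avg_segment_line_distance(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                                np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```

Pose estimation would then minimize the sum of such terms over all model-to-image line correspondences.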
Secret Data Fusion Based on Chinese Remainder Theorem
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492875
Yuliang Lu, Xuehu Yan, Lintao Liu, Jingju Liu, Guozheng Yang, Qiang Li
In some high-level security applications, multiple participants must input their own secret data to achieve access control (for example, a secure cabinet opened jointly by several owners), and traditional security techniques are not applicable. Although secret sharing may be used in such scenarios, directly applying standard secret sharing methods, including visual cryptography (VC) and polynomial-based secret sharing, raises several problems. In this paper, we first describe the application scenario, namely secret data fusion, and its requirements, and explain how secret data fusion differs from secret sharing. Then, we propose a candidate method for secret data fusion based on the Chinese remainder theorem (CRT). Theoretical analyses and experiments demonstrate the effectiveness of our method.
Citations: 1
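The reconstruction step that any CRT-based fusion scheme rests on can be sketched in a few lines. This is the textbook theorem only, not the paper's full protocol: share generation, range checks and security conditions are omitted.

```python
from functools import reduce

def crt_combine(residues, moduli):
    """Chinese remainder theorem: recover the unique x modulo
    prod(moduli) satisfying x % m_i == r_i, for pairwise-coprime m_i."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse (Py 3.8+)
    return x % M

# each participant holds one (residue, modulus) pair; the secret data
# is recoverable only when all pairs are input together
secret = 123456
moduli = [97, 101, 103]                 # pairwise coprime, product > secret
shares = [secret % m for m in moduli]
assert crt_combine(shares, moduli) == secret
```

Fusing the inputs is then exactly this combination: any missing pair leaves the secret undetermined modulo the remaining product.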
Multi-Class Brain Images Classification Based on Reality-Preserving Fractional Fourier Transform and Adaboost
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492732
Ying Zhang, Qianqian Hu, Zhen Guo, Jian Xu, Kun Xiong
With the development of computer technology, the diagnostic capability of computer-aided diagnosis systems has improved, helping to classify brain images into healthy or pathological categories automatically and accurately. In this paper, we propose an improved method, introducing the reality-preserving fractional Fourier transform (RPFRFT) and Adaboost, to classify brain images into five categories: healthy, cerebrovascular disease, neoplastic disease, degenerative disease and inflammatory disease. We used 190 T2-weighted magnetic resonance images in the experiment. First, we employ RPFRFT to extract spectral features from each image. Second, we apply principal component analysis (PCA) to reduce the feature dimensionality to 86. Third, the reduced spectral features of the samples are combined and fed into Adaboost to train the classifier. A 10×10-fold cross validation obtained an accuracy of 98.6%. The result confirms the effectiveness of the proposed method.
Citations: 1
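The PCA step of the pipeline (projecting each spectral feature vector down to 86 dimensions) can be sketched with numpy's SVD. The random matrix below merely stands in for the RPFRFT features, whose extraction is not reproduced here.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project row-wise feature vectors onto the top principal
    components (the right singular vectors of the centered data)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal axes,
    # ordered by explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# 190 samples of 4096-dim features, a stand-in for the paper's
# 190 T2-weighted images' RPFRFT spectra (the 4096 is an assumption)
X = rng.normal(size=(190, 4096))
Z = pca_reduce(X, 86)          # each sample now has 86 features
```

The reduced matrix `Z` is what would be concatenated and handed to the Adaboost classifier.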
Characteristic Function Based Parameter Estimation for Ocean Ambient Noise
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492728
Xuebo Zhang, Cheng Tan, Wenwei Ying
Parameter initialization plays an important role in iterative parameter estimation. Based on the characteristic function, this paper presents a parameter estimation method for Class B noise that accounts for parameter initialization. The noise is first modeled as a symmetric alpha-stable (SαS) distribution. With the log method, we obtain estimated parameters, which are then used as the initial values of the iteration, improving the convergence speed. Processing results on simulated data indicate that the parameters of Class B noise can be efficiently estimated with the presented method.
Citations: 3
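A log-method initialization of the SαS parameters can be sketched from the empirical characteristic function: for symmetric alpha-stable data |φ(t)| = exp(-γ|t|^α), so log(-log|φ(t)|) is linear in log t, and two evaluation points give α and γ. The sanity check uses Gaussian data, the α = 2 member of the family with γ = σ²/2; the evaluation points t1, t2 are illustrative choices, not the paper's.

```python
import numpy as np

def sas_log_estimate(x, t1=0.5, t2=1.5):
    """Initial SaS parameter estimates from the empirical characteristic
    function (ECF): fit the line log(-log|phi(t)|) = log(gamma) + alpha*log(t)
    through two points of the ECF magnitude."""
    def neg_log_ecf(t):
        return -np.log(np.abs(np.mean(np.exp(1j * t * x))))
    y1, y2 = neg_log_ecf(t1), neg_log_ecf(t2)
    alpha = (np.log(y2) - np.log(y1)) / (np.log(t2) - np.log(t1))
    gamma = y1 / t1 ** alpha
    return alpha, gamma

# Gaussian data is SaS with alpha = 2 and gamma = sigma**2 / 2
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=100_000)
alpha_hat, gamma_hat = sas_log_estimate(samples)   # expect roughly (2, 0.5)
```

These closed-form values are exactly the kind of cheap starting point the abstract feeds into the subsequent iteration.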
Color Correction Based on Histogram Matching and Polynomial Regression for Image Stitching
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492895
Huiqian Niu, Qiankun Lu, Chao Wang
Image stitching is a widely used technique for obtaining panoramas in daily life. Color differences often arise between neighboring views due to different exposure levels and view angles. Although many automatic color correction approaches have been proposed, they are not suitable for all multi-view image and video stitching, especially when occlusion or parallax exists. This paper puts forward a new method based on histogram matching and polynomial regression. Experimental results show that the method handles color differences well whether or not parallax exists.
Citations: 18
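The histogram-matching half of the method can be sketched as a CDF lookup that maps one view's intensity distribution onto its neighbour's (one channel shown; the polynomial-regression refinement described in the abstract is omitted).

```python
import numpy as np

def match_histogram(src, ref):
    """Map src's intensity distribution onto ref's by matching CDFs:
    each source quantile is sent to the reference value at the same
    quantile (a monotone lookup table)."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # reference value at each source quantile
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, src.ravel())].reshape(src.shape)

rng = np.random.default_rng(0)
dark = rng.integers(0, 128, size=(64, 64)).astype(float)   # under-exposed view
bright = dark + 100.0                                      # same scene, brighter
corrected = match_histogram(dark, bright)                  # recovers the shift
```

In a stitching pipeline the mapping would be estimated on the overlap region and then applied to the whole view.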
A Noise-Resistant Stereo Matching Algorithm Integrating Regional Information
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492874
Feng Huahui, Zhang Geng, Zhang Xin, Hu Bingliang
Focusing on the problem of stereo matching for low-SNR images, such as images collected at night, we propose a novel matching framework based on the semi-global matching algorithm and AD-Census. The algorithm extends the original algorithms in two ways. First, image segmentation information is added as an additional constraint, which solves the problem of incomplete paths and improves the accuracy of the cost calculation. Second, the matching cost volume is calculated with an AD-SoftCensus measure, which minimizes the impact of noise on matching quality by changing the census descriptor from binary to ternary. Results on the Middlebury standard test data show that the algorithm significantly improves matching precision. In addition, a low-light binocular platform is built to test our method in a night environment. Results show the disparity maps are more accurate than those of previous methods.
Citations: 1
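The binary-to-ternary census change can be sketched as follows: each neighbour is coded darker / similar / brighter relative to the window centre, with a tolerance band τ that absorbs small-amplitude noise. This illustrates the descriptor only; the absolute-difference term and the semi-global aggregation are omitted, and τ is an assumed parameter.

```python
import numpy as np

def ternary_census(img, window=3, tau=2.0):
    """Ternary census descriptor: each neighbour in the window is coded
    -1 (darker than centre by more than tau), 0 (within tau), or
    +1 (brighter by more than tau)."""
    r = window // 2
    h, w = img.shape
    codes = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = img[r + dy : h - r + dy, r + dx : w - r + dx]
            center = img[r : h - r, r : w - r]
            diff = shifted - center
            codes.append(np.sign(diff) * (np.abs(diff) > tau))
    return np.stack(codes, axis=-1)   # (h-2r, w-2r, window*window - 1)

def hamming_cost(c1, c2):
    """Matching cost: number of disagreeing ternary digits."""
    return np.sum(c1 != c2, axis=-1)

img = np.array([[10.0, 5.0, 1.0],
                [ 5.0, 5.0, 5.0],
                [ 9.0, 5.0, 2.0]])
codes = ternary_census(img)   # one interior pixel -> shape (1, 1, 8)
```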
Siamese Network for Object Tracking in Aerial Video
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492751
Xiaolin Zhao, Shilin Zhou, Lin Lei, Zhipeng Deng
In Unmanned Aerial Vehicle (UAV) videos, object tracking remains a challenge due to low spatial resolution and demanding real-time requirements. Recently, deep learning methods have made great progress in object tracking in computer vision, especially fully-convolutional Siamese networks (SiamFC). Inspired by this, this paper investigates the use of SiamFC for object tracking in UAV videos. The network is trained on part of the UAV123 dataset and the Stanford Drone dataset. First, an exemplar image is extracted from the first frame, and search regions are extracted from the following frames. Then, a Siamese network tracks objects by calculating the similarity between the exemplar image and the search region. To evaluate our method, we test on the challenging VIVID dataset. Experiments show that the proposed method improves accuracy and speed on low-spatial-resolution UAV videos compared to existing methods.
Citations: 3
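The exemplar-vs-search-region scoring at the heart of a SiamFC-style tracker can be sketched with plain normalized cross-correlation standing in for the learned embeddings; in SiamFC proper, both inputs first pass through a shared convolutional network and the correlation runs on feature maps.

```python
import numpy as np

def similarity_map(exemplar, search):
    """Dense similarity between an exemplar patch and every position of
    a larger search region (the score map a tracker argmaxes).  Here
    normalized cross-correlation replaces the learned embedding."""
    eh, ew = exemplar.shape
    sh, sw = search.shape
    ex = exemplar - exemplar.mean()
    out = np.empty((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = search[i : i + eh, j : j + ew]
            win = win - win.mean()
            denom = np.sqrt((ex ** 2).sum() * (win ** 2).sum()) + 1e-12
            out[i, j] = (ex * win).sum() / denom
    return out

rng = np.random.default_rng(0)
search = rng.normal(size=(32, 32))
exemplar = search[10:18, 14:22].copy()   # target known to sit at (10, 14)
score = similarity_map(exemplar, search)
peak = np.unravel_index(np.argmax(score), score.shape)
```

The peak of `score` localizes the target in the new frame; a real tracker would also smooth the map and penalize large displacements.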
Taxi Detection Based on the Sliding Color Histogram Matching
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492826
Xunping Huang, Ridong Zhang, Ke-bin Jia, Zuyun Wang, Wenzhen Nie
The Adaboost vehicle detection algorithm based on Haar features performs well in both real-time operation and accuracy. However, it produces many missed and false detections for special vehicles in complicated traffic flow. In this paper, a method of detecting the taxi window area is proposed to replace whole-vehicle detection. At the same time, a sliding color histogram matching method is proposed to reduce false detections. Finally, traffic surveillance video is used to verify the algorithm; the detection results prove that it achieves good accuracy and real-time performance for detecting taxis.
Citations: 1
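The sliding color-histogram matching step can be sketched as follows: a reference taxi-color histogram is compared against windows slid across a candidate strip, keeping the best-scoring offset. The window size, step, bin count, and L1 distance are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def color_hist(patch, bins=8):
    """Normalized per-channel color histogram of an HxWx3 patch."""
    h = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def best_match(strip, template_hist, win=16, step=4, bins=8):
    """Slide a window along a candidate strip and return the offset
    whose color histogram is closest (L1) to the reference histogram."""
    best_x, best_d = 0, np.inf
    for x in range(0, strip.shape[1] - win + 1, step):
        d = np.abs(color_hist(strip[:, x : x + win], bins) - template_hist).sum()
        if d < best_d:
            best_x, best_d = x, d
    return best_x, best_d

# toy strip: black background with a taxi-yellow block at x = 32..48
strip = np.zeros((16, 64, 3))
strip[:, 32:48] = (255.0, 255.0, 0.0)
template = color_hist(strip[:, 32:48])
x, d = best_match(strip, template)   # finds the yellow block
```

A detection would be accepted when the best distance falls below a threshold, filtering out non-taxi candidates of similar shape.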