
Latest publications: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)

GNSS System Time Offset Real-Time Monitoring with GLONASS ICBs Estimated
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492910
Sijia Kong, Jing Peng, Wenxiang Liu, Mengli Wang, Feixue Wang
Global Navigation Satellite Systems (GNSS) include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the Galileo satellite navigation system (Galileo), and the BeiDou Navigation Satellite System (BDS). With the development of BDS, it is necessary to monitor the system time offset between BDS and the other GNSSs to enhance compatibility and interoperability among them. The system time offset between GLONASS and BDS is affected by the inter-channel biases (ICBs) caused by GLONASS's frequency-division multiple access (FDMA) technique. To reduce the impact of GLONASS ICBs on the BDS-GLONASS system time offset, this paper proposes a method of estimating GLONASS ICB parameters and system time offset parameters in real time. The experimental results indicate that the standard deviation (STD) of the BDS-GLONASS monitoring value can be reduced from 6-7 ns to about 3 ns (a reduction of more than 45%), and the STDs of the BDS-GPS and BDS-Galileo monitoring values can be reduced by more than 15%. This work will also support further research in GNSS system time offset monitoring and forecasting.
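The joint estimation the abstract describes can be sketched with ordinary least squares: one common time-offset unknown plus one ICB unknown per GLONASS frequency channel, with a zero-mean constraint on the ICBs to make the system identifiable. This is an illustrative sketch under assumed single-epoch observations, not the authors' actual estimator; the function name and data layout are invented for the example.

```python
import numpy as np

def estimate_offset_and_icbs(obs, sat_idx, n_channels):
    """Jointly estimate a common system time offset and one inter-channel
    bias (ICB) per GLONASS frequency channel from single-epoch offset
    observations, via least squares.

    obs[i] is modeled as offset + icb[sat_idx[i]] + noise. A constraint
    row forcing the ICBs to sum to zero removes the rank deficiency
    between the common offset and the biases.
    """
    obs = np.asarray(obs, dtype=float)
    n = len(obs)
    # Design matrix: column 0 -> offset, columns 1.. -> per-channel ICBs.
    A = np.zeros((n + 1, 1 + n_channels))
    A[np.arange(n), 0] = 1.0
    A[np.arange(n), 1 + np.asarray(sat_idx)] = 1.0
    A[n, 1:] = 1.0                    # constraint row: sum of ICBs = 0
    b = np.append(obs, 0.0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:]                # offset estimate, ICB estimates
```

With noiseless synthetic data whose true ICBs sum to zero, the offset and biases are recovered exactly.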
Citations: 2
Unsupervised Super-Resolution Framework for Medical Ultrasound Images Using Dilated Convolutional Neural Networks
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492821
Jingfeng Lu, Wanyu Liu
Ultrasound imaging is one of the most widely used imaging modalities for clinical diagnosis, but it suffers from low resolution due to intrinsic physical flaws. In this paper, we present a novel unsupervised super-resolution (USSR) framework to solve the single-image super-resolution (SR) problem in ultrasound imaging, where training examples are scarce. Our method exploits the powerful nonlinear mapping ability of convolutional neural networks (CNNs) without relying on prior training or any external data: the multi-scale contextual information extracted from the test image itself is used to train an image-specific network at test time. We employ several techniques to improve convergence and accuracy, including dilated convolution and residual learning. To capture valuable internal information, dilated convolution is used to increase the receptive field without increasing the number of network parameters. To speed up training convergence, residual learning directly learns the difference between the high-resolution and low-resolution images. Quantitative and qualitative evaluations on real ultrasound images demonstrate that the proposed method outperforms the state-of-the-art unsupervised method.
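The key property the abstract relies on, that dilation widens the receptive field without adding weights, can be seen in a minimal single-channel 2D dilated convolution. This NumPy sketch is for illustration only (the paper uses trained CNN layers, not hand-rolled loops); a 3x3 kernel with dilation d covers a (2d+1)x(2d+1) neighborhood while still having only 9 weights.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """'Same'-padded 2D correlation with a dilated kernel: the kernel
    taps are spaced `dilation` pixels apart, so the receptive field
    grows with `dilation` while the parameter count stays fixed.
    Assumes odd kernel dimensions."""
    kh, kw = kernel.shape
    ph = (kh - 1) * dilation // 2
    pw = (kw - 1) * dilation // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[
                i * dilation : i * dilation + img.shape[0],
                j * dilation : j * dilation + img.shape[1]]
    return out
```

A kernel that is 1 at its center and 0 elsewhere acts as the identity at any dilation, which is a quick sanity check of the index arithmetic.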
Citations: 20
Visualization of Dust Evolution Simulation Model in Campus Environment
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492797
Hu Xiaomei, Li Minghang, Wang Chuan, Yang Xu, Wei Chenjun
To establish a simulation model of dust evolution and reveal the dust evolution process, Shanghai University is selected as the simulation area, the Kinetic Monte Carlo method is used to simulate dust particles in the virtual campus, and OpenGL and the C language are used to visualize the dust evolution simulation model. Collected data and simulation data are compared at different locations on the campus, and the results prove the validity of the model. Based on the visualization results, the relationships among wind speed, simulation time, vegetation effect, and the accumulation of dust particles on the ground or their motion in the vertical plane are revealed. Visualization of the dust evolution simulation model will provide a valid reference for dust control.
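The core of a Kinetic Monte Carlo simulation like the one described is the event-selection step: choose an event with probability proportional to its rate and draw the waiting time from an exponential distribution with the total rate. A minimal sketch (standard KMC, not the paper's C/OpenGL implementation; the rate list is a placeholder):

```python
import numpy as np

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: select event k with probability
    rates[k] / sum(rates), and advance time by an exponentially
    distributed waiting interval with the total rate."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    u = rng.uniform(0.0, total)
    # side="right" skips zero-rate events when u lands on a boundary.
    event = min(int(np.searchsorted(np.cumsum(rates), u, side="right")),
                len(rates) - 1)
    dt = -np.log(1.0 - rng.uniform()) / total   # exponential waiting time
    return event, dt
```

In a dust simulation each "event" would be a particle hop (advection, deposition, resuspension) whose rate depends on local wind speed and vegetation.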
Citations: 1
Vision-Based Simultaneous Localization and Mapping on Lunar Rover
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492755
Pei An, Yanchao Liu, Wei Zhang, Z. Jin
With the development of lunar exploration technology, vision-based localization and navigation has become a research focus in the field of lunar rovers. This paper proposes an image-based method for localization and mapping with a lunar rover, where the motion of the camera represents the movement of the rover. From the images acquired by the camera, the relative pose of the camera and the 3D landmarks are obtained using multi-view geometry and bundle adjustment optimization; no prior knowledge of the rover's motion is required. In addition, this paper proposes a grid-based feature extraction method to solve the problems of uneven feature distribution and mismatching. The algorithm has been tested in real time on a large image dataset. Finally, error analysis between the estimated poses and the real trajectory demonstrates the excellent performance of the algorithm.
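Grid-based feature extraction, as mentioned in the abstract, typically means bucketing keypoints into grid cells and keeping only the strongest few per cell so features do not cluster in textured regions. A sketch of that selection step (illustrative only; the paper's cell sizes, scores, and detector are not specified here):

```python
def grid_select(features, img_w, img_h, nx, ny, per_cell):
    """Bucket (x, y, score) features into an nx-by-ny grid over a
    img_w-by-img_h image and keep the `per_cell` highest-scoring
    features in each cell, forcing an even spatial distribution."""
    cells = {}
    for x, y, s in features:
        key = (min(int(x * nx / img_w), nx - 1),
               min(int(y * ny / img_h), ny - 1))
        cells.setdefault(key, []).append((s, x, y))
    kept = []
    for bucket in cells.values():
        bucket.sort(reverse=True)            # strongest first
        kept.extend((x, y) for s, x, y in bucket[:per_cell])
    return kept
```

Capping each cell discards surplus keypoints from crowded cells while weakly textured cells keep everything they have.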
Citations: 5
The Accurate Estimation of Disparity Maps from Cross-Scale Reference-Based Light Field
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492884
Mandan Zhao, X. Hao, Gaochang Wu
This paper addresses accurate disparity map estimation in the cross-scale reference-based light field, which consists of several low-quality images arranged around one central high-resolution (HR) image. In the framework, we use an HR image-guidance CNN (HRIG-CNN) to estimate the disparity map at the HR level. Specifically, we first calculate a coarse disparity map using our cross-pattern strategy, which blends multiple disparity maps. We then refine this coarse disparity map with HRIG-CNN to obtain a high-quality disparity map that contains detail information and preserves edges. With HR image guidance, HRIG-CNN achieves state-of-the-art results for disparity estimation in such hybrid light field conditions. Finally, we provide both quantitative and qualitative evaluations of different methods, and demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
Citations: 0
Multi-Focus Image Fusion Using Block-Wise Color-Principal Component Analysis
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492725
Abubakar Siddique, Bin Xiao, Weisheng Li, Qamar Nawaz, Isma Hamid
In this work, a multi-focus image fusion method is proposed based on color-principal component analysis (C-PCA). The method consists of several phases. First, both source images are split into their three RGB color channels. Next, for each channel, covariances are calculated for both images, and special weights are computed to generate intermediate images. Convolution with a Gaussian kernel is then used to smooth the images, and a zero-crossing-based second-order derivative is applied to detect edges. Finally, the images are decomposed into blocks, and salient-feature information from the Laplacian of Gaussian and the spatial frequency of each block is used to obtain the fused image. Experimental results show that the proposed method performs well compared with existing methods in terms of quality metrics.
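The block-wise selection step the abstract describes can be sketched with a single focus measure. This minimal version uses only spatial frequency (one of the two cues the paper mentions, omitting the Laplacian of Gaussian and the C-PCA weighting) to pick, for each block, the sharper of the two source blocks; it is a simplified illustration, not the authors' full pipeline.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of a block: RMS of row-wise and column-wise
    first differences. Higher values indicate better focus."""
    rf = np.diff(block, axis=1)
    cf = np.diff(block, axis=0)
    return np.sqrt((rf ** 2).mean() + (cf ** 2).mean())

def fuse_blockwise(a, b, bs):
    """Block-wise multi-focus fusion: for each bs-by-bs block, copy
    the block with the higher spatial frequency into the output.
    Assumes a and b are aligned single-channel images of equal size."""
    out = np.empty_like(a)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa = a[i:i + bs, j:j + bs]
            pb = b[i:i + bs, j:j + bs]
            out[i:i + bs, j:j + bs] = (
                pa if spatial_frequency(pa) >= spatial_frequency(pb) else pb)
    return out
```

For color input the same selection would be applied per channel or on a luminance/PCA projection of the RGB channels.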
Citations: 10
Rigid Body Pose Estimation from Line Correspondences
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492787
Yantao Yue, Xiangyi Sun
In this paper, we aim to solve pose estimation of rigid body motion in real time with a 3D line model. Based on the line's perspective projection model, we design a new error function, expressed as the average integral of the distance between line segments, to estimate the parameters. Considering the continuity of motion, we restore broken line segments by constrained re-projection of the model lines. Finally, we propose to estimate many frames jointly in an SFM framework, which yields better precision at the cost of speed. Comparisons with baseline methods on synthetic and real images demonstrate accurate estimation in complex environments. For planar objects, the pose precision along the x, y, and z axes is better than 0.5 m at a distance of 100 m, and the precision of relative positions perpendicular to and along the optical axis is better than 0.3%.
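One plausible reading of "the average integral of the distance between line segments" is the distance from a detected segment to a projected model line, averaged along the segment. Because the signed point-to-line distance varies linearly along a segment, that average has a closed form; the sketch below computes it for a segment against a line ax + by + c = 0. This is an interpretation for illustration, not the paper's exact error function.

```python
import math

def avg_segment_to_line_distance(p1, p2, line):
    """Average unsigned distance from the points of segment p1-p2 to
    the line ax + by + c = 0. The signed distance is linear along the
    segment, so the integral has a closed form; when the segment
    crosses the line, it is split at the zero crossing."""
    a, b, c = line
    n = math.hypot(a, b)
    d1 = (a * p1[0] + b * p1[1] + c) / n   # signed distance at p1
    d2 = (a * p2[0] + b * p2[1] + c) / n   # signed distance at p2
    if d1 * d2 >= 0:
        # Same side of the line: average of the endpoint distances.
        return abs(d1 + d2) / 2
    # Opposite sides: integrate |d1 + t*(d2 - d1)| over t in [0, 1].
    return (d1 * d1 + d2 * d2) / (2 * abs(d1 - d2))
```

Summing this quantity over all matched segment/line pairs gives a scalar cost that a nonlinear optimizer can minimize over the pose parameters.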
Citations: 0
Taxi Detection Based on the Sliding Color Histogram Matching
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492826
Xunping Huang, Ridong Zhang, Ke-bin Jia, Zuyun Wang, Wenzhen Nie
The AdaBoost vehicle detection algorithm based on Haar features performs well in both real-time operation and accuracy. However, it produces many missed and false detections for special vehicles in complex traffic flow. In this paper, a method of detecting the taxi window area is proposed to replace whole-vehicle detection. At the same time, a sliding color histogram matching method is proposed to reduce false detections. Finally, traffic surveillance video is used to verify the algorithm; the detection results show that it achieves good accuracy and real-time performance for taxi detection.
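Sliding color histogram matching can be sketched as: slide a window over a candidate region, build each window's color histogram, and score it against a template histogram by histogram intersection. The 1-D version below illustrates the idea on a strip of intensities (a simplification; the paper's exact color space, window shape, and similarity measure are not specified here).

```python
import numpy as np

def hist_intersection(h1, h2):
    """Normalized histogram intersection in [0, 1]; 1 means the two
    distributions place mass identically across bins."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()

def sliding_match(strip, template_hist, win, bins=16):
    """Slide a window of length `win` along a 1-D intensity strip,
    score each window's histogram against the template by
    intersection, and return the best offset and its score."""
    best_score, best_off = -1.0, 0
    for off in range(len(strip) - win + 1):
        h, _ = np.histogram(strip[off:off + win], bins=bins, range=(0, 256))
        score = hist_intersection(h.astype(float), template_hist.astype(float))
        if score > best_score:
            best_score, best_off = score, off
    return best_off, best_score
```

Thresholding the best score then accepts or rejects the candidate, which is how histogram matching can suppress false detections from a Haar cascade.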
Citations: 1
Multi-Class Brain Images Classification Based on Reality-Preserving Fractional Fourier Transform and Adaboost
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492732
Ying Zhang, Qianqian Hu, Zhen Guo, Jian Xu, Kun Xiong
With the development of computer technology, the diagnostic capability of computer-aided diagnosis systems has improved, helping to classify brain images into healthy or pathological categories automatically and accurately. In this paper, we propose an improved method that combines the reality-preserving fractional Fourier transform (RPFRFT) and AdaBoost to classify brain images into five categories: healthy, cerebrovascular disease, neoplastic disease, degenerative disease, and inflammatory disease. We used 190 T2-weighted images obtained by magnetic resonance imaging in the experiment. First, we employ RPFRFT to extract spectrum features from each magnetic resonance image. Second, we apply principal component analysis (PCA) to reduce the feature dimensionality to only 86. Third, the reduced spectral features of the different samples are combined and fed into AdaBoost to train the classifier. A 10×10-fold cross validation obtained an accuracy of 98.6%, confirming the effectiveness of the proposed method.
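The PCA step in the pipeline, projecting high-dimensional spectral features onto the leading principal components, can be sketched via SVD of the centered data matrix. A minimal NumPy sketch (the RPFRFT feature extraction and AdaBoost training are omitted; `pca_reduce` is an assumed name):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (samples x features) onto the k leading
    principal components. The components are the right singular
    vectors of the centered data, so the columns of the result are
    ordered by decreasing explained variance."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

In the paper's setting, X would hold one row of RPFRFT spectrum features per image and k = 86; the reduced rows are then fed to the AdaBoost classifier.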
Citations: 1
Color Correction Based on Histogram Matching and Polynomial Regression for Image Stitching
Pub Date : 2018-06-01 DOI: 10.1109/ICIVC.2018.8492895
Huiqian Niu, Qiankun Lu, Chao Wang
Image stitching is a widely used technique for obtaining panoramas in daily life. Color differences often arise between neighboring views due to different exposure levels and view angles. Although many automatic color correction approaches have been proposed, they are not appropriate for all multi-view image and video stitching, especially when occlusion or parallax exists. This paper puts forward a new method based on histogram matching and polynomial regression. Experimental results show that the method handles the color difference well whether or not parallax exists.
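The histogram matching half of such a method is the classic CDF-mapping step: remap source intensities so their empirical CDF matches the reference view's CDF. A per-channel NumPy sketch (illustrative; the paper's polynomial regression stage and any overlap-region weighting are not shown):

```python
import numpy as np

def match_histogram(src, ref):
    """Remap the intensities of `src` so that their empirical CDF
    matches that of `ref` (standard histogram matching). Works on a
    single channel; for color images, apply per channel."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # For each source quantile, find the reference value at the same
    # quantile, then apply the lookup to every source pixel.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, src.ravel())].reshape(src.shape)
```

In a stitching pipeline the reference would be the overlap region of the neighboring view, and a low-order polynomial fit of the resulting lookup table can then smooth and extrapolate the color transfer.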
Citations: 18