
Latest Publications in Opto-Electronic Engineering (Guangdian Gongcheng)

Camera calibration based on color-coded phase-shifted fringe
Q3 Engineering Pub Date : 2021-01-15 DOI: 10.12086/OEE.2021.200118
Wei Boyan, Tian Qingguo, Ge Baozhen
To address the poor robustness of traditional calibration methods to blurring noise at target feature points, a calibration method based on color-coded phase-shifted fringes is proposed. Using a liquid crystal display panel as the calibration target, horizontal and vertical color-coded phase-shifted stripes are displayed in sequence; the orthogonal phase-shifted stripes are recovered by separating the color channels; and, based on phase-shifting theory, the intersections of the orthogonal phase-truncation lines are computed as feature points. After changing the target pose several times and extracting feature points, the plane-based camera calibration technique is applied to calibrate both a single camera and a binocular system. Furthermore, color-coded phase-shifted circles are added at the four corners of the target pattern so that feature points can be extracted and sorted automatically, which improves calibration efficiency. The experimental results indicate that even when the target image is blurred, the reprojection error of single-camera calibration is 0.15 pixels, and the standard deviation of binocular-system measurements after calibration is 0.1 mm.
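The phase-recovery step above can be sketched in a few lines. This is a minimal numpy illustration, not the authors' code: it assumes a standard four-step phase shift with offsets of k·π/2 applied to one fringe direction, and synthetic fringes with an invented 16-pixel period.

```python
import numpy as np

def wrapped_phase(frames):
    """Four-step phase shifting: frames I_k = A + B*cos(phi + k*pi/2).
    Returns the wrapped phase of phi in (-pi, pi]."""
    i0, i1, i2, i3 = frames
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic vertical fringes with a hypothetical period of 16 pixels
h, w = 8, 64
x = np.arange(w, dtype=float)
phase = 2 * np.pi * x / 16.0
frames = [np.tile(np.cos(phase + k * np.pi / 2), (h, 1)) for k in range(4)]
phi = wrapped_phase(frames)  # wrapped phase map, one value per pixel
```

Running the same recovery on the horizontal stripes yields the orthogonal phase map; feature points are then located where the wrap (truncation) lines of the two maps intersect.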
Citations: 2
A novel optical signal-to-noise ratio monitoring technique based on Gaussian process regression
Q3 Engineering Pub Date : 2021-01-15 DOI: 10.12086/OEE.2021.200077
Yanhui Ran, H. Chunjie, Li Wei
We propose and experimentally demonstrate a novel in-band optical signal-to-noise ratio (OSNR) monitoring technique that uses a commercially available, widely tunable optical bandpass filter to sample the measured optical power as input features for Gaussian process regression (GPR). The technique accurately estimates OSNR over a large dynamic range, is unaffected by the configuration of the optical link, and is distributed and low-cost. Experimental results for 32 Gbaud PDM-16QAM signals demonstrate OSNR monitoring with a root mean squared error (RMSE) of 0.429 dB and a mean absolute error (MAE) of 0.294 dB over a large OSNR range of -1 dB to 30 dB. Moreover, the proposed technique proves insensitive to chromatic dispersion, polarization mode dispersion, nonlinear effects, and the cascaded filtering effect (CFE). Furthermore, it can potentially be employed for link monitoring at intermediate nodes without knowledge of the transmission information, and it is convenient to operate because no calibration is required.
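At its core, GPR prediction is kernel linear algebra. The sketch below shows the posterior mean with a squared-exponential kernel on synthetic stand-in data; the single scalar feature, the target curve, and all hyperparameters are invented for illustration (the paper's inputs are filtered optical-power samples, but the regression math is the same).

```python
import numpy as np

def rbf(a, b, length=3.0):
    """Squared-exponential (RBF) kernel on 1D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# Synthetic stand-in data: scalar feature -> smooth target curve plus noise
rng = np.random.default_rng(0)
x_train = np.linspace(-1.0, 30.0, 40)
y_train = np.sin(x_train / 5.0) + 0.01 * rng.standard_normal(x_train.size)

# Posterior mean: m(x*) = K(x*, X) (K(X, X) + sigma^2 I)^{-1} y
sigma2 = 1e-3
alpha = np.linalg.solve(rbf(x_train, x_train) + sigma2 * np.eye(x_train.size),
                        y_train)

def predict(x_new):
    return rbf(x_new, x_train) @ alpha

est = predict(np.array([10.0]))[0]  # estimate at one query point
```

In practice a full GPR implementation also returns the posterior variance, which gives a per-estimate confidence interval for free.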
Citations: 0
Design and experiment of a tunable narrow-passband deep UV light source
Q3 Engineering Pub Date : 2021-01-01 DOI: 10.12086/OEE.2021.210173
K. Yu, Xingyue Zhu, Chi Wu
Citations: 1
Analysis of temperature-induced liquid crystal phase control beam quality deterioration
Q3 Engineering Pub Date : 2021-01-01 DOI: 10.12086/OEE.2021.200463
Fan Huang, Xiangru Wang, He Xiaoxian, Mengxue Zhang, Yingli Wang, Hongyang Guo, Jie Hu, Haotong Ma
Citations: 0
Background suppression for infrared dim small target scene based on adaptive gradient reciprocal filtering
Q3 Engineering Pub Date : 2021-01-01 DOI: 10.12086/OEE.2021.210122
Biao Li, Zhiyong Xu, Chen Wang, Jianlin Zhang, Xiangru Wang, Xiangsuo Fan
Because an infrared dim small target is small in scale and weak in energy, the background must be suppressed and the target enhanced to ensure reliable detection and tracking in later stages. To improve the ability of the gradient reciprocal filter to suppress clutter texture, and to reduce the interference of residual texture with the target in the difference image, an adaptive gradient reciprocal filtering (AGRF) algorithm is proposed in this paper. In the AGRF, the adaptive judgment threshold and the adaptive relevancy-coefficient function of inter-pixel correlation in a local region are determined by analyzing the distribution and statistical characteristics of the background region, the clutter texture, and the target. The element values of the adaptive gradient reciprocal filter are then determined by combining the relevancy-coefficient function with the gradient reciprocal function. Experimental results indicate that, at the same target-enhancement performance, the sensitivity of the AGRF algorithm to clutter texture is significantly lower than that of the traditional gradient reciprocal filtering algorithm.
Compared with nine other algorithms, the AGRF algorithm achieves a better signal-to-noise ratio gain (SNRG) and background suppression factor (BSF).
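For contrast with the adaptive version described above, the plain (non-adaptive) gradient-reciprocal idea can be sketched in a few lines. This is a simplified illustration, not the AGRF: each pixel's background value is predicted from its 3x3 neighbours with weights inversely proportional to the absolute gray-level difference, so the unpredictable point target survives in the difference image while flat background cancels.

```python
import numpy as np

def gr_background(img, eps=1.0):
    """Predict each pixel from its 8 neighbours, weighted by the
    reciprocal of the absolute gray-level difference (the 'gradient')."""
    h, w = img.shape
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            if dy == dx == 1:
                continue  # exclude the centre pixel itself
            nb = pad[dy:dy + h, dx:dx + w]
            wgt = 1.0 / (eps + np.abs(nb - img))
            num += wgt * nb
            den += wgt
    return num / den

scene = np.full((16, 16), 10.0)
scene[8, 8] = 100.0                      # dim point target on flat background
residual = scene - gr_background(scene)  # difference image
```

On real clutter, the fixed reciprocal weighting leaks texture into `residual`; the AGRF's adaptive threshold and relevancy coefficients are exactly what this sketch lacks.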
Citations: 3
Optimized bow-tie metasurface and its application in trace detection of lead ion
Q3 Engineering Pub Date : 2021-01-01 DOI: 10.12086/OEE.2021.210123
Junqing Zhang, Yiping Wu, Shenghao Chen, Shiyi Gu, L. Sun, Mingru Zhou, Lin Chen
Citations: 3
Pavement crack detection based on the U-shaped fully convolutional neural network
Q3 Engineering Pub Date : 2020-12-22 DOI: 10.12086/OEE.2020.200036
Hanshen Chen, M. Yao, Qu Xin-yu
Crack detection is one of the most important tasks in a pavement management system. Cracks have no fixed shape, and their appearance changes drastically under different lighting conditions, which makes them hard to detect with image-analysis algorithms. To address these issues, we propose an effective U-shaped fully convolutional neural network called UCrackNet. First, a dropout layer is added to the skip connections to achieve better generalization. Second, pooling indices are used to reduce shift and distortion during up-sampling. Third, four atrous convolutions with different dilation rates are densely connected in the bridge block, so that the receptive field of the network covers every pixel of the whole image. In addition, multi-level fusion is introduced at the output stage for better performance. Evaluations on the two public datasets, CrackTree206 and AIMCrack, demonstrate that the proposed method achieves high accuracy and good generalization ability.
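The claim that stacked atrous convolutions enlarge the receptive field can be checked arithmetically. The sketch below uses serially stacked stride-1 dilated 3x3 convolutions; the dilation rates are illustrative (the paper does not list them here), and dense connections as in UCrackNet's bridge block only enlarge the field further.

```python
def receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    """Receptive field of stride-1 convolutions stacked in series:
    each layer with dilation d adds (kernel - 1) * d pixels."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```

With rates (1, 2, 4, 8) the stack already spans 31 feature-map pixels; since the bridge block sits at the most down-sampled stage of the U shape, the equivalent span in input pixels is multiplied by the cumulative down-sampling factor, which is how the field comes to cover the whole image.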
Citations: 0
Position-rate control for the time delay control system of tip-tilt mirror
Q3 Engineering Pub Date : 2020-12-22 DOI: 10.12086/OEE.2020.200006
Ruan Yong, Xun Tianrong, Yang Tao, Tang Tao
In an image-based tip-tilt mirror control system, the closed-loop performance and bandwidth are limited by the sensor sampling frequency and the system delay. Under this bandwidth constraint, this paper proposes measuring the position with a linear encoder and obtaining the rate signal by differencing. Position-rate feedback control based on the image sensor is then realized to improve the error-suppression ability of the tip-tilt mirror control system. Because of the added rate feedback, the control system acquires a differential characteristic; once the rate-feedback loop is closed, the image position loop has an integral characteristic. A PI controller is then used to stabilize the system, raising it from a type-0 to a type-2 system and improving its error suppression. Simulation and experiment show that this method effectively improves the closed-loop performance of the tracking control system in the low-frequency domain.
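The type-0-to-type-2 argument can be checked in a toy simulation. In the sketch below, all numbers (sample time, rate-loop lag, PI gains) are invented for illustration: the closed rate loop is approximated as a first-order lag on a rate command, the mirror position integrates the rate, and the PI controller on position drives the steady-state step error to zero.

```python
dt, tau = 1e-3, 0.01        # sample time; rate-loop lag (assumed values)
kp, ki = 20.0, 50.0         # PI gains (assumed values)
r = 1.0                     # position step command

p = v = integ = 0.0
for _ in range(5000):       # 5 s of simulated time
    e = r - p
    integ += ki * e * dt    # integral term -> integral characteristic
    u = kp * e + integ      # PI output acts as the rate command
    v += dt * (u - v) / tau # closed rate loop modelled as first-order lag
    p += dt * v             # mirror position integrates the rate
```

With the integrator in the controller plus the plant's own integration from rate to position, the loop is type 2, so it tracks a step with zero steady-state error and a ramp with zero steady-state lag.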
Citations: 2
Light-field image super-resolution based on multi-scale feature fusion
Q3 Engineering Pub Date : 2020-12-22 DOI: 10.12086/OEE.2020.200007
Z. Yuanyuan, Shi Shengxian
As a new generation of imaging device, the light-field camera can simultaneously capture the spatial position and incident angle of light rays. However, the recorded light field trades spatial resolution against angular resolution; in particular, the limited spatial resolution of the sub-aperture images restricts the application range of light-field cameras. Therefore, this paper proposes a light-field super-resolution neural network that fuses multi-scale features to obtain a super-resolved light field. The deep-learning-based framework contains three major modules: multi-scale feature extraction, global feature fusion, and up-sampling. First, the inherent structural features of the 4D light field are learned by the multi-scale feature extraction module; the fusion module is then exploited for feature fusion and enhancement; finally, the up-sampling module achieves light-field super-resolution. Experimental results on synthetic and real-world light-field datasets show that this method outperforms other state-of-the-art methods in both visual and numerical evaluations.
In addition, the super-resolved light-field images are applied to depth estimation, and the results illustrate that the disparity map is enhanced by the light-field spatial super-resolution.
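The spatial/angular trade-off mentioned above is easiest to see in the sampling layout itself. The toy sketch below (all dimensions invented) builds a lenslet-style raster in which each macro-pixel interleaves the U x V angular samples, then recovers one sub-aperture image by strided slicing; the U*V angular samples are exactly what the sensor's spatial resolution is spent on.

```python
import numpy as np

# A 4D light field L(u, v, s, t): angular axes (u, v), spatial axes (s, t)
U, V, S, T = 5, 5, 32, 32
lf = np.arange(U * V * S * T, dtype=float).reshape(U, V, S, T)

# Lenslet-style raster: interleave the angular samples inside each macro-pixel
lenslet = lf.transpose(2, 0, 3, 1).reshape(S * U, T * V)

# Recover the central sub-aperture view by strided slicing
u0, v0 = U // 2, V // 2
sub = lenslet[u0::U, v0::V]   # a 32x32 view from a 160x160 sensor raster
```

Each sub-aperture image is only S x T pixels, even though the raster is S*U x T*V: this is the limited spatial resolution that the super-resolution network is meant to recover.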
Citations: 0
Automatic 3D vertebrae CT image active contour segmentation method based on weighted random forest
Q3 Engineering Pub Date : 2020-12-22 DOI: 10.12086/OEE.2020.200002
L. Xia, Gan Quan, Li Bing, Liu Xiao, Wang Bo
To address the sensitivity to initial contours and the inaccurate segmentation that arise when active contours are applied to CT images, this paper proposes an automatic 3D vertebral CT active contour segmentation method combined with a weighted random forest, called WRF-AC. The method introduces a weighted random forest algorithm and an active contour energy function that includes an edge-energy term. First, the weighted random forest is trained on 3D Haar-like features extracted from the vertebral CT, and the resulting 'vertebra center' serves as the initial contour for segmentation. Then, segmentation of the vertebral CT image is completed by minimizing the active contour energy function containing the edge energy. Experimental results show that, on the same datasets, this method segments spine CT images more accurately and quickly to extract the vertebrae.
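The "weighted" part of a weighted random forest can be sketched as a weighted majority vote over per-tree predictions. This is a generic illustration, not the paper's specific weighting scheme (which it does not detail here); the weights could come from, e.g., each tree's out-of-bag accuracy.

```python
import numpy as np

def weighted_vote(tree_preds, tree_weights):
    """Combine per-tree class predictions by weighted majority vote.
    tree_preds:   (n_trees, n_samples) integer class labels.
    tree_weights: (n_trees,) non-negative per-tree weights."""
    classes = np.unique(tree_preds)
    # For each class, sum the weights of the trees that voted for it
    scores = np.stack([
        np.where(tree_preds == c, tree_weights[:, None], 0.0).sum(axis=0)
        for c in classes
    ])                                   # shape: (n_classes, n_samples)
    return classes[np.argmax(scores, axis=0)]

# Three trees, two samples: one accurate tree outvotes two weak ones
preds = np.array([[0, 1],
                  [1, 1],
                  [1, 0]])
weights = np.array([0.9, 0.3, 0.3])
labels = weighted_vote(preds, weights)
```

With uniform weights this reduces to the ordinary random-forest majority vote; non-uniform weights let a few reliable trees dominate, which is the usual motivation for weighting.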
Citations: 2