
Latest publications in Optics and Lasers in Engineering

Realization of high-fidelity higher-order Bessel beams
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-13 | DOI: 10.1016/j.optlaseng.2024.108559

Bessel beams are known solutions to the Helmholtz equation whose amplitude distributions strictly conform to Bessel functions of the first kind. In addition, higher-order Bessel beams carry helical phases with topological charges equal to the beam order. However, common generation methods produce only approximate higher-order Bessel beams whose diffraction-free distances are shortened. In this paper, we introduce the concept of the high-fidelity higher-order Bessel beam (HHBB): a generated beam whose complex amplitude distribution is highly consistent with the theoretical expression of the corresponding higher-order Bessel beam. The generated HHBBs offer complex amplitude distributions that agree more closely with the corresponding theoretical expressions, as well as enhanced diffraction-free distances, with potential applications in optical manipulation, laser processing, and high-resolution optical imaging.
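For reference, the ideal nth-order Bessel beam invoked here has the standard textbook form (a general expression, not quoted from the paper itself):

```latex
E_n(r,\varphi,z) \;=\; A\, J_n(k_r r)\, e^{\,i n \varphi}\, e^{\,i k_z z},
\qquad k_r^2 + k_z^2 = k^2 ,
```

where \(J_n\) is the nth-order Bessel function of the first kind and the helical factor \(e^{in\varphi}\) carries the topological charge \(n\). The transverse amplitude is independent of \(z\), which is exactly the diffraction-free property that approximate generation methods degrade and the HHBB aims to preserve.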

Citations: 0
CCM-Net: Color compensation and coordinate attention guided underwater image enhancement with multi-scale feature aggregation
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-12 | DOI: 10.1016/j.optlaseng.2024.108590

Due to light scattering and wavelength absorption in water, underwater images exhibit blurred details, low contrast, and color deviation. Existing underwater image enhancement methods fall into traditional methods and deep learning-based methods. Traditional methods either rely on scene priors and lack robustness, or are not flexible enough, resulting in poor enhancement effects. Deep learning methods have achieved good results in underwater image enhancement thanks to their powerful feature representation ability. However, these methods cannot enhance underwater images with diverse degradations because they do not consider the inconsistent attenuation across different color channels and spatial regions. In this paper, we propose a novel asymmetric encoder-decoder network for underwater image enhancement, called CCM-Net. Concretely, we first introduce a prior-knowledge-based encoder, which includes color compensation (CC) modules and feature extraction modules built from depth-wise separable convolution and global-local coordinate attention (GLCA). Then, we design a multi-scale feature aggregation (MFA) module to integrate shallow, middle, and deep features. Finally, we deploy a decoder to reconstruct the underwater images from the extracted features. Extensive experiments on publicly available datasets demonstrate that CCM-Net effectively improves the visual quality of underwater images and achieves impressive performance.
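As a rough illustration of why depth-wise separable convolution is attractive in an encoder like the one described, a parameter count comparison against a standard convolution (generic figures, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    # standard k x k convolution: every output channel mixes all input channels
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # depth-wise k x k filter per input channel, then a 1x1 point-wise mixing layer
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 64)       # 3*3*64*64 = 36864 weights
sep = separable_params(3, 64, 64)  # 3*3*64 + 64*64 = 4672 weights
print(std, sep, round(std / sep, 1))  # roughly an 8x reduction
```

The same spatial receptive field is obtained with a fraction of the weights, which is why such modules are common in feature extractors.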

Citations: 0
An on-machine measurement and calibration method for incident laser error in dual-swing laser heads
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-12 | DOI: 10.1016/j.optlaseng.2024.108563

The dual-swing laser head is essential for five-axis laser machining, yet its precision is greatly affected by the incident laser beam. Any positional or angular deviation in the laser can cause the focal spot position of the head to change continuously during rotation, thereby severely compromising the manufacturing performance of the head. However, current calibration methods for the incident beam of dual-swing laser heads suffer from low accuracy and insufficient engineering applicability. This paper proposes an on-machine measurement and calibration method for incident laser error in dual-swing laser heads. An error model of the incident beam for a dual-swing laser head was established, from which the law governing spot position changes caused by incident beam errors during the head's rotation was derived. Following this law, a precision calibration method for the laser head's incident beam error was proposed, based on the theory of optical image height. An on-machine error measurement system was then built on the dual-swing laser head, and the calibration method was verified experimentally. The results show that this calibration method improves the accuracy of the incident beam of a dual-swing laser head to 0.071 mm, approximately 3–4 times better than traditional calibration methods, thereby significantly enhancing the manufacturing precision of the laser head.
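The spot-drift behavior described can be sketched with a toy paraxial model: if the incident beam carries a small angular deviation θ relative to the rotary axis, the focal spot is displaced laterally by roughly f·tanθ, and as the head rotates the spot traces a circle of that radius. This is a simplified sketch under stated assumptions; the focal length and deviation values are hypothetical, not from the paper:

```python
import math

def spot_trajectory(f_mm, theta_rad, angles_deg):
    """Lateral focal-spot position as the head rotates, for an incident beam
    tilted by theta_rad relative to the rotary axis (paraxial toy model)."""
    r = f_mm * math.tan(theta_rad)  # constant radial offset of the spot
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a in angles_deg]

# hypothetical 100 mm focusing optic with a 1 mrad incident-beam tilt
pts = spot_trajectory(f_mm=100.0, theta_rad=1e-3, angles_deg=range(0, 360, 90))
radii = [math.hypot(x, y) for x, y in pts]
print(radii)  # every sample sits on a circle of radius f*tan(theta) ~ 0.1 mm
```

Measuring this traced circle at several rotation angles is, in spirit, what an on-machine measurement of the incident-beam error exploits.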

Citations: 0
Research on high precision localization of space target with multi-sensor association
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-12 | DOI: 10.1016/j.optlaseng.2024.108553

In response to the challenges of acquiring spatial target position information and achieving high precision with existing methods, this paper proposes a multi-dimensional high-precision positioning method for spatial targets through multi-sensor fusion. Utilizing optical detection technology, the method extracts two-dimensional positional information of spatial targets on the observation plane. By deriving a fusion positioning formula for visible light and infrared based on the Gaussian mixture TPHD, the proposed method improves positioning accuracy by 0.2 m compared with using visible light or infrared alone. Additionally, by integrating laser ranging for distance-dimension information, precise target positioning in the world coordinate system is achieved. Outdoor spatial-target positioning experiments using visible light and infrared cameras along with laser ranging validate the method's effectiveness. Comparative analysis with a binary star angular-measurement-only method demonstrates a 17.9 % improvement in positioning accuracy, with the proposed method achieving 0.12 m accuracy for 5 cm spatial targets at a distance of 5 km.
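The paper's visible/infrared fusion uses a Gaussian-mixture TPHD filter; as a much simpler stand-in, inverse-variance weighting shows why fusing two noisy position estimates improves on either sensor alone (illustrative sketch only, with hypothetical measurement values):

```python
def fuse(z1, var1, z2, var2):
    # minimum-variance fusion of two independent scalar position estimates
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)  # precision-weighted mean
    var = 1.0 / (w1 + w2)                # fused variance < both inputs
    return z, var

# hypothetical visible-light and infrared position estimates (m)
z, var = fuse(10.2, 0.3 ** 2, 9.9, 0.4 ** 2)
print(z, var)  # fused estimate lies between the inputs, with lower variance
```

The same precision-weighting intuition underlies the more general multi-target filter: the better-conditioned sensor dominates the fused estimate.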

Citations: 0
Multi-line laser scanning reconstruction with binocularly speckle matching and trained deep neural networks
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-12 | DOI: 10.1016/j.optlaseng.2024.108582

A multi-line laser scanning system for 3D topography measurement is proposed. The method combines the high precision of laser scanning technology with high reconstruction efficiency. In this paper, speckle reconstruction, multi-line laser, and binocular reconstruction techniques are used to construct a 3D reconstruction system; test equipment is built, and the problems arising during system establishment are studied in practice. To solve the mismatching problem in binocular multi-line laser matching, a method is proposed that sorts out the correspondence of multiple laser lines in binocular images based on speckle matching results. To optimize the multi-line laser matching effect, a speckle matching network based on deep learning is proposed, which integrates the grayscale images of the left and right cameras as supplementary information and takes the speckle image and grayscale image as inputs of the network model to obtain more accurate and edge-complete matching results. Finally, the matching results of the multi-line laser and the camera calibration parameters are used to reconstruct the object point cloud. Experimental results show that the proposed speckle matching method makes binocular multi-line laser point cloud reconstruction more robust and stable than the traditional method, and accuracy analysis of the system shows that the average measurement accuracy of the proposed method reaches 0.05 mm.
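Once left/right correspondences are sorted out, depth recovery in a rectified binocular setup follows the standard triangulation relation Z = f·B/d (focal length f in pixels, baseline B, disparity d). A minimal sketch with hypothetical camera parameters, not the paper's calibration:

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    # rectified stereo: depth is inversely proportional to disparity
    return f_px * baseline_mm / disparity_px

# hypothetical rig: 1200 px focal length, 80 mm baseline, 48 px disparity
z = depth_from_disparity(f_px=1200.0, baseline_mm=80.0, disparity_px=48.0)
print(z)  # 2000.0 mm
```

A one-pixel disparity error at this depth shifts Z by tens of millimetres, which is why sub-pixel, edge-complete matching matters for the quoted 0.05 mm accuracy.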

Citations: 0
Phase retrieval from random phase-shifting interferograms using neural network and least squares method
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-12 | DOI: 10.1016/j.optlaseng.2024.108554

This paper proposes a neural network and least squares method to retrieve phase from three-frame random phase-shifting interferograms. The phase retrieval method involves two steps. First, a neural network predicts the phase shifts of the three-frame random phase-shifting interferograms. Once the phase shifts are determined, the phase is retrieved using the least squares method. The method is simple and requires no iterative calculation. Its accuracy is verified by comparison with an advanced iterative algorithm. Analysis of simulated interferograms shows that the root mean square (RMS) phase error can approach 0.1 rad. Interferograms recorded on an interferometer verify the method's feasibility.
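The second step, least-squares phase retrieval with known shifts, is standard and can be sketched directly. Each frame is modeled as I_k = A + B·cos(φ + δ_k) = A + (B cos φ)·cos δ_k − (B sin φ)·sin δ_k, a linear system in [A, B cos φ, B sin φ] per pixel (a generic sketch of the least-squares step; the network that predicts δ_k is not reproduced here):

```python
import numpy as np

def retrieve_phase(frames, shifts):
    """Least-squares phase retrieval from phase-shifted interferograms.
    frames: 2-D arrays I_k = A + B*cos(phi + delta_k)
    shifts: known (or network-predicted) phase shifts delta_k in radians."""
    d = np.asarray(shifts, dtype=float)
    # design matrix rows: [1, cos(d_k), -sin(d_k)] for unknowns [A, B cos phi, B sin phi]
    M = np.stack([np.ones_like(d), np.cos(d), -np.sin(d)], axis=1)
    I = np.stack([f.ravel() for f in frames])        # shape (K, N_pixels)
    x, *_ = np.linalg.lstsq(M, I, rcond=None)        # shape (3, N_pixels)
    phi = np.arctan2(x[2], x[1])                     # atan2(B sin phi, B cos phi)
    return phi.reshape(frames[0].shape)

# synthetic check against a known phase map
yy, xx = np.mgrid[0:32, 0:32]
phi_true = 0.02 * (xx ** 2 + yy ** 2) % (2 * np.pi) - np.pi
shifts = [0.0, 1.1, 2.3]
frames = [5 + 3 * np.cos(phi_true + d) for d in shifts]
phi_est = retrieve_phase(frames, shifts)
err = np.angle(np.exp(1j * (phi_est - phi_true)))    # wrap-safe difference
print(np.max(np.abs(err)))
```

With three frames and three unknowns the system is exactly determined; the least-squares form also accommodates more frames without change.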

Citations: 0
MFR-Net: A multi-feature fusion phase unwrapping method for different speckle noises
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-12 | DOI: 10.1016/j.optlaseng.2024.108585

Phase unwrapping is a crucial step in laser interferometry for obtaining accurate physical measurements of objects. To reduce the impact of speckle noise on the wrapped phase during actual measurement and improve subsequent measurement accuracy, a multi-feature fusion phase unwrapping method for different speckle noises, named MFR-Net, is proposed in this paper. The network is composed of a front-end multi-module filter processing layer and a back-end network with dilated convolution and a coordinate attention mechanism. By reducing the random phase differences introduced by different levels of noise, the network enhances its ability to extract spatial features, such as gradient information between pixels, under speckle noise, so that it successfully unwraps wrapped phases with different speckle noises and accurately recovers the real phase information. Taking wrapped phases with multiplicative speckle noise and additive random noise as the dataset, ablation and comparison experiments show that MFR-Net yields superior unwrapping results. Under three different levels of speckle noise, the average values of MSE, SSIM, PSNR and AU for MFR-Net improve by at least 84.80 %, 10.99 %, 29.00 % and 7.72 %, respectively, compared to the PDVQG, TIE, DLPU and VURNet algorithms. When the standard deviation of the speckle noise varies continuously over the range [1.0, 2.0], the average values of the four indexes reach 0.12 rad, 0.91, 31.80 dB and 99.96 %, respectively, indicating the stronger robustness of MFR-Net. In addition, phase step unwrapping performed by MFR-Net reduces MSE by 80 % and 87.35 % compared to DLPU and VURNet, respectively, demonstrating outstanding generalization capability. The proposed MFR-Net realizes correct phase unwrapping under different speckle noises and may be applied in laser interferometry applications such as digital holography and interferometric synthetic aperture radar.
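The unwrapping task itself (independent of the network) is easy to illustrate in one dimension: the wrapped phase is the true phase reduced modulo 2π, and unwrapping restores continuity by removing the 2π jumps. A noise-free sketch; MFR-Net addresses the far harder case where speckle noise corrupts the 2-D gradients this relies on:

```python
import numpy as np

true_phase = np.linspace(0.0, 12.0, 200)       # smooth phase ramp in radians
wrapped = np.angle(np.exp(1j * true_phase))    # wrap into (-pi, pi]
unwrapped = np.unwrap(wrapped)                 # undo the 2*pi discontinuities
print(np.max(np.abs(unwrapped - true_phase)))  # exact recovery without noise
```

Classical unwrapping assumes adjacent samples differ by less than π; speckle noise breaks that assumption, which motivates the learned, noise-robust approach.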

Citations: 0
Analytical equation for camera imaging with refractive interfaces
IF 3.5 | CAS Zone 2 (Engineering & Technology) | Q2 OPTICS | Pub Date: 2024-09-11 | DOI: 10.1016/j.optlaseng.2024.108581

Camera imaging through refractive interfaces is a crucial issue in photogrammetric measurement. Most past studies adopted numerical optimization algorithms based on refractive ray-tracing procedures, in which the camera and interface parameters are calculated iteratively. Inappropriate initial values can cause the iterations to diverge, and the iterations cannot efficiently reveal the true nature of refractive imaging. Obtaining camera calibration results that are both flexible and physically interpretable therefore remains challenging. In this study, we model refractive imaging by employing ray transfer matrix analysis and deduce an analytical refractive imaging (ARI) equation that explicitly describes the refractive geometry in matrix form. Although this equation is built upon the paraxial approximation, a numerical experiment shows that it accurately describes refractive imaging at a considerable object distance and with a slightly tilted flat interface. The ARI equation can be used to define the expansion center and the normal vector of the flat interface. Finally, we also propose a flexible measurement method to determine the orientation of the flat interface, wherein the orientation is measured directly rather than calculated by iterative procedures.
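To show the kind of machinery ray transfer matrix analysis provides, in the paraxial (y, θ) convention a flat interface has matrix [[1, 0], [0, n1/n2]]; chaining it with free propagation reproduces the familiar apparent-depth result for an object viewed through the interface. This is a generic sketch of the method, not the paper's full ARI equation:

```python
import numpy as np

def translation(t):
    # free propagation over distance t: height changes, angle does not
    return np.array([[1.0, t], [0.0, 1.0]])

def flat_refraction(n1, n2):
    # paraxial Snell at a flat interface: height unchanged, angle scaled
    return np.array([[1.0, 0.0], [0.0, n1 / n2]])

n_water, n_air, depth = 1.33, 1.0, 100.0
theta = 0.01  # small ray angle leaving an underwater point (radians)
ray = flat_refraction(n_water, n_air) @ translation(depth) @ np.array([0.0, theta])
apparent_depth = ray[0] / ray[1]  # back-project the refracted ray to the axis
print(apparent_depth)             # depth * n_air / n_water, about 75.2
```

Composing such matrices for the whole camera-plus-interface chain is what yields a closed-form, physically interpretable imaging equation instead of an iterative fit.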

Citations: 0
Multicolor imaging based on brightness coded set
IF 3.5 Zone 2 (Engineering & Technology) Q2 OPTICS Pub Date : 2024-09-11 DOI: 10.1016/j.optlaseng.2024.108552

Fluorescence imaging necessitates precise matching of the excitation source, dichroic mirror, emission filter, detector, and dyes, which is complex and time-consuming, especially in probe-multiplexing applications. We propose a novel method for multicolor imaging based on a brightness coded set. Each brightness code consists of 12 bits (OOOXXXYYYZTT), denoting probe type, cube, emission filter, imaging result, and priority, respectively. The brightness of a probe in an imaging system is defined as the product of the extinction coefficient, the quantum yield, and the filter transmittance. When the brightness exceeds the threshold, Z=1 indicates a clear image; otherwise Z=0. The higher the brightness value, the higher the priority (TT). To validate the efficacy and efficiency of the coding method, we conducted two separate four-color imaging experiments. The proposed method substantially simplifies the conventional, spectrogram-based approach to device matching in multicolor imaging and presents a promising avenue for the advancement of intelligent multicolor imaging systems.
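As a rough illustration of the coding scheme described above, a code can be assembled from the brightness definition and a threshold test. Note the hedges: the abstract specifies only the field layout OOOXXXYYYZTT and the brightness product; the decimal-digit field encoding, the threshold value, and the example probe parameters below are assumptions made purely for illustration.

```python
def brightness(extinction_coeff, quantum_yield, transmittance):
    # Brightness as defined in the abstract: the product of the
    # extinction coefficient, quantum yield and filter transmittance
    return extinction_coeff * quantum_yield * transmittance

def encode(probe_id, cube_id, filter_id, bright, threshold, priority):
    # Assemble an illustrative 12-character code OOOXXXYYYZTT:
    # three digits each for probe (O), cube (X) and emission filter (Y),
    # one imaging-result bit Z (1 = brightness above threshold, i.e. a
    # clear image), and a two-digit priority TT. The digit widths are
    # an assumption; only the field meanings come from the abstract.
    z = 1 if bright >= threshold else 0
    return f"{probe_id:03d}{cube_id:03d}{filter_id:03d}{z}{priority:02d}"

# Hypothetical dye: extinction 0.8, quantum yield 0.9, transmittance 0.7
b = brightness(0.8, 0.9, 0.7)
code = encode(probe_id=1, cube_id=2, filter_id=3,
              bright=b, threshold=0.5, priority=7)
print(b, code)  # 0.504 and the 12-character code '001002003107'
```

In the paper's scheme the priority TT would be derived by ranking probes by brightness; the ranking step is omitted here for brevity.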

{"title":"Multicolor imaging based on brightness coded set","authors":"","doi":"10.1016/j.optlaseng.2024.108552","DOIUrl":"10.1016/j.optlaseng.2024.108552","url":null,"abstract":"<div><p>Fluorescence imaging necessitates precise matching of excitation source, dichroic mirror, emission filter, detector and dyes, which is complex and time-consuming, especially for applications of probe multiplexing. We propose a novel method for multicolor imaging based on a brightness coded set. Each brightness code consists of 12 bits (<span><math><mi>O</mi><mi>O</mi><mi>O</mi><mi>X</mi><mi>X</mi><mi>X</mi><mi>Y</mi><mi>Y</mi><mi>Y</mi><mi>Z</mi><mi>T</mi><mi>T</mi></math></span>), denoting probe type, cube, emission filter, imaging result and priority, respectively. The brightness of a probe in an imaging system is defined as the product of extinction coefficient, quantum yield and the filter transmittance. When the brightness exceeds the threshold, <span><math><mi>Z</mi><mo>=</mo><mn>1</mn></math></span> indicates a clear image, otherwise <span><math><mi>Z</mi><mo>=</mo><mn>0</mn></math></span>. The higher the brightness value the higher the priority (<em>TT</em>). To validate the efficacy and efficiency of the coding method, we conducted two separate experiments involving four-color imaging. 
The proposed method offers a substantial simplification of the conventional approach to device matching in multicolor imaging by leveraging spectrograms, and presents a promising avenue for the advancement of intelligent multicolor imaging systems.</p></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142169550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attenuated color channel adaptive correction and bilateral weight fusion for underwater image enhancement
IF 3.5 Zone 2 (Engineering & Technology) Q2 OPTICS Pub Date : 2024-09-11 DOI: 10.1016/j.optlaseng.2024.108575

Due to the absorption and scattering of light and the influence of suspended particles, underwater images commonly exhibit color distortion, reduced contrast, and diminished detail. This paper proposes an attenuated-color-channel adaptive correction and bilateral weight fusion approach, called WLAB, to address these degradation issues. Specifically, a novel white-balance method is first applied to balance the color channels of the input image. A local-block-based fast non-local means method is then proposed to obtain a denoised version of the color-corrected image, and an adaptive stretching method that considers the histogram's local features is used to obtain a contrast-enhanced version. Finally, a bilateral weight fusion method fuses these two versions to produce an output image with their complementary advantages. Experimental studies on three benchmark underwater image datasets compare WLAB with ten state-of-the-art methods; the results show that WLAB has a significant advantage over the comparative methods. Notably, WLAB is largely independent of camera settings and enhances the precision of various image-processing applications, including keypoint and saliency detection. It also demonstrates commendable adaptability in improving low-light and foggy images.
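The fusion pipeline described above (white balance → denoise → adaptive stretch → weighted fusion of the two enhanced versions) can be sketched with deliberately simplified stand-ins: a gray-world white balance, a global percentile stretch, and fixed scalar fusion weights. None of these are the paper's actual components (WLAB uses a novel white balance, a histogram-local adaptive stretch, and bilateral weights); the sketch only shows the overall two-branch-and-fuse structure.

```python
import numpy as np

def gray_world_white_balance(img):
    # Scale each channel so its mean matches the global mean
    # (a simple stand-in for the paper's novel white-balance step)
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

def percentile_stretch(img, lo=1, hi=99):
    # Global percentile contrast stretch; the paper uses an adaptive,
    # histogram-local variant, so this is only an illustration
    a, b = np.percentile(img, [lo, hi])
    return np.clip((img - a) / max(b - a, 1e-6), 0.0, 1.0)

def weighted_fusion(v1, v2, w1, w2):
    # Pixel-wise weighted blend of the two enhanced versions; the paper
    # derives bilateral weights, whereas fixed scalars are used here
    return np.clip((w1 * v1 + w2 * v2) / (w1 + w2), 0.0, 1.0)

# Synthetic low-contrast "underwater" image in [0, 0.6)
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)) * 0.6

balanced = gray_world_white_balance(img)       # color-corrected branch
stretched = percentile_stretch(balanced)       # contrast-enhanced branch
fused = weighted_fusion(balanced, stretched, w1=0.4, w2=0.6)
print(fused.shape, float(fused.min()), float(fused.max()))
```

The two-branch design lets the fusion step combine a color-faithful version with a contrast-boosted one, which is the complementary-advantages idea the abstract emphasizes.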

{"title":"Attenuated color channel adaptive correction and bilateral weight fusion for underwater image enhancement","authors":"","doi":"10.1016/j.optlaseng.2024.108575","DOIUrl":"10.1016/j.optlaseng.2024.108575","url":null,"abstract":"<div><p>Due to the absorption and scattering of light and the influence of suspended particles, underwater images commonly exhibit color distortions, reduced contrast, and diminished details. This paper proposes an attenuated color channel adaptive correction and bilateral weight fusion approach called WLAB to address the aforementioned degradation issues. Specifically, a novel white balance method is first applied to balance the color channel of the input image. Moreover, a local-block-based fast non-local means method is proposed to obtain a denoised version of the color-corrected image. Then, an adaptive stretching method that considers the histogram's local features to get a contrast-enhanced version of the color-corrected image. Finally, a bilateral weight fusion method is proposed to fuse the above two image versions to obtain an output image with complementary advantages. Experimental studies are conducted on three benchmark underwater image datasets and compared with ten state-of-the-art methods. The results show that WLAB has a significant advantage over the comparative methods. Notably, WLAB exhibits a degree of independence from camera settings and enhances the precision of various image processing applications, including key points and saliency detection. 
Additionally, it demonstrates commendable adaptability in improving low-light and foggy images.</p></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142169400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0