Realization of high-fidelity higher-order Bessel beams
Pub Date: 2024-09-13  DOI: 10.1016/j.optlaseng.2024.108559
Bessel beams are known to be solutions of the Helmholtz equation whose amplitude distributions conform strictly to Bessel functions of the first kind. In addition, higher-order Bessel beams carry helical phases whose topological charge equals the beam order. However, common generation methods produce only approximate higher-order Bessel beams with shortened diffraction-free distances. In this paper, we introduce the concept of the high-fidelity higher-order Bessel beam (HHBB): a generated beam whose complex amplitude distribution is highly consistent with the theoretical expression of the corresponding higher-order Bessel beam. The generated HHBBs offer complex amplitude distributions that agree more closely with the corresponding theoretical expressions and exhibit extended diffraction-free distances, with potential applications in optical manipulation, laser processing, and high-resolution optical imaging.
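For reference, the theoretical expression against which fidelity is judged has the standard form below (amplitude A, order \ell, radial and longitudinal wavenumbers k_r, k_z; the symbols are assumed here, not taken from the paper):

\[ E_\ell(r,\varphi,z) = A\, J_\ell(k_r r)\, e^{i\ell\varphi}\, e^{i k_z z}, \qquad k_r^2 + k_z^2 = k^2, \]

so the transverse amplitude follows the first-kind Bessel function J_\ell and the phase carries a helical term of topological charge \ell.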
CCM-Net: Color compensation and coordinate attention guided underwater image enhancement with multi-scale feature aggregation
Pub Date: 2024-09-12  DOI: 10.1016/j.optlaseng.2024.108590
Due to light scattering and wavelength-dependent absorption in water, underwater images exhibit blurred details, low contrast, and color deviation. Existing underwater image enhancement methods are divided into traditional methods and deep learning-based methods. Traditional methods either rely on scene priors and lack robustness, or are insufficiently flexible, resulting in poor enhancement. Deep learning methods have achieved good results in underwater image enhancement owing to their powerful feature representation ability. However, these methods struggle with variously degraded underwater images because they do not consider the inconsistent attenuation across color channels and spatial regions. In this paper, we propose a novel asymmetric encoder-decoder network for underwater image enhancement, called CCM-Net. Concretely, we first introduce a prior-knowledge-based encoder, which includes color compensation (CC) modules and feature extraction modules built from depth-wise separable convolution and global-local coordinate attention (GLCA). Then, we design a multi-scale feature aggregation (MFA) module to integrate shallow, middle, and deep features. Finally, we deploy a decoder to reconstruct the underwater images from the extracted features. Extensive experiments on publicly available datasets demonstrate that our CCM-Net effectively improves the visual quality of underwater images and achieves impressive performance.
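As a rough illustration of what channel-wise color compensation does, here is a classical rule often used in fusion-based underwater enhancers: boost the attenuated red channel using the better-preserved green channel. This is not the paper's learned CC module; the function name and the parameter alpha are assumptions.

import numpy as np

def compensate_red_channel(img, alpha=1.0):
    # img: RGB float array in [0, 1]; compensate red with the green channel.
    r, g = img[..., 0], img[..., 1]
    r_mean, g_mean = r.mean(), g.mean()
    # Classical compensation rule: add more where red is weak and green is strong.
    r_comp = r + alpha * (g_mean - r_mean) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out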
An on-machine measurement and calibration method for incident laser error in dual-swing laser heads
Pub Date: 2024-09-12  DOI: 10.1016/j.optlaseng.2024.108563
The dual-swing laser head is essential for five-axis laser machining, yet its precision is greatly affected by the incident laser beam. Any positional or angular deviation of the incident laser causes the focal spot position of the head to change continuously during rotation, thereby severely compromising the machining performance of the head. However, current calibration methods for the incident beam of dual-swing laser heads suffer from low accuracy and limited engineering applicability. This paper proposes an on-machine measurement and calibration method for incident laser error in dual-swing laser heads. An error model of the incident beam in a dual-swing laser head was established, from which the law governing spot position changes caused by incident beam errors during the head's rotation was derived. Following this law, a precision calibration method for the head's incident beam error was proposed, based on the theory of optical image height. An on-machine error measurement system was then built on the dual-swing laser head, and the calibration method was verified through experiments. The results show that this calibration method improves the accuracy of the incident beam for the dual-swing laser head to 0.071 mm, approximately 3-4 times better than traditional calibration methods, thereby significantly enhancing the machining precision of the laser head.
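As a simplified paraxial illustration of how an angular error in the incident beam moves the focal spot (the image-height relation for a collimated beam through an ideal focusing lens; this is not the paper's full dual-swing error model, and the focal length used below is made up):

import numpy as np

def focal_spot_shift(f_mm, tilt_rad):
    # An angular error tilt_rad of a collimated incident beam shifts the focal
    # spot laterally by roughly f * tan(theta) for an ideal lens.
    return f_mm * np.tan(tilt_rad)

# e.g. a 0.5 mrad tilt through a 200 mm focusing lens moves the spot ~0.1 mm
print(focal_spot_shift(200.0, 0.5e-3))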
Research on high precision localization of space target with multi-sensor association
Pub Date: 2024-09-12  DOI: 10.1016/j.optlaseng.2024.108553
In response to the challenges of acquiring spatial target position information and achieving high precision with existing methods, this paper proposes a multi-dimensional high-precision positioning method for spatial targets based on multi-sensor fusion. Using optical detection, the method extracts the two-dimensional position of a spatial target on the observation plane. A fusion positioning formula for visible light and infrared is derived based on the Gaussian mixture TPHD, which improves positioning accuracy by 0.2 m compared with using visible light or infrared alone. Additionally, by integrating laser ranging for the distance dimension, precise target positioning in the world coordinate system is achieved. Outdoor spatial-target positioning experiments using visible-light and infrared cameras together with laser ranging validate the method's effectiveness. Comparative analysis with a binary star angular-measurement-only method demonstrates a 17.9 % improvement in positioning accuracy, with the proposed method achieving 0.12 m accuracy for 5 cm spatial targets at a 5 km distance.
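As a sketch of the final geometric step only — combining the two angles recovered by the imaging sensors with the laser-ranging distance into a 3D position. The sensor-centred frame and axis convention below are assumptions; the Gaussian mixture TPHD angle fusion itself is not reproduced here.

import numpy as np

def angles_range_to_xyz(azimuth_rad, elevation_rad, range_m):
    # Standard spherical-to-Cartesian conversion (x east, y north, z up).
    x = range_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
    y = range_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
    z = range_m * np.sin(elevation_rad)
    return np.array([x, y, z])

# e.g. a target at 5 km range, 30 deg azimuth, 10 deg elevation
print(angles_range_to_xyz(np.deg2rad(30.0), np.deg2rad(10.0), 5000.0))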
Multi-line laser scanning reconstruction with binocularly speckle matching and trained deep neural networks
Pub Date: 2024-09-12  DOI: 10.1016/j.optlaseng.2024.108582
A multi-line laser scanning system for 3D topography measurement is proposed. The method combines the high precision of laser scanning technology with high reconstruction efficiency. In this paper, speckle reconstruction, multi-line laser, and binocular reconstruction techniques are used to build a 3D reconstruction system and test equipment, and the practical problems arising while setting up the system are studied. To solve the mismatching problem in binocular multi-line laser matching, a method is proposed that sorts out the correspondence of multiple laser lines in the binocular images based on speckle matching results. To further improve the multi-line laser matching, a deep-learning speckle matching network is proposed that integrates the grayscale images of the left and right cameras as supplementary information and takes the speckle image and grayscale image as the network inputs, yielding more accurate matching results with complete edges. Finally, the multi-line laser matching results and the camera calibration parameters are used to reconstruct the object point cloud. Experimental results show that the proposed speckle matching method makes binocular multi-line laser point cloud reconstruction more robust and stable than the traditional method, and an accuracy analysis of the system shows that the average measurement accuracy of the proposed method reaches 0.05 mm.
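For context, the generic step that turns a matched binocular pixel pair into a 3D point is linear (DLT) triangulation with the calibrated projection matrices — a sketch of that step only, not the authors' speckle-matching network; names and shapes are assumptions.

import numpy as np

def triangulate_point(P1, P2, x1, x2):
    # P1, P2: 3x4 projection matrices from stereo calibration.
    # x1, x2: matched pixel coordinates (u, v) in the left/right images.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous 3D point = right null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]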
Phase retrieval from random phase-shifting interferograms using neural network and least squares method
Pub Date: 2024-09-12  DOI: 10.1016/j.optlaseng.2024.108554
This paper proposes a neural network and least squares method to retrieve the phase from three-frame random phase-shifting interferograms. The phase retrieval involves two steps. First, a neural network predicts the phase shifts of the three-frame random phase-shifting interferograms. Once the phase shifts are determined, the phase is retrieved by the least squares method. The method is simple and requires no iterative calculation. Its accuracy is verified by comparison with the advanced iterative algorithm. For simulated interferograms, the root mean square (RMS) phase error approaches 0.1 rad. Interferograms recorded on an interferometer verify the feasibility of the method.
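A minimal sketch of the second step, assuming the phase shifts delta_k are already known (e.g. predicted by the network). The per-pixel intensity model I_k = a + b*cos(phi + delta_k) is linear in [a, b*cos(phi), b*sin(phi)], so one least-squares solve recovers the wrapped phase for every pixel at once; function and variable names are my own, not the authors' code.

import numpy as np

def retrieve_phase(frames, shifts):
    # frames: (K, H, W) interferograms; shifts: length-K phase shifts in radians.
    # I_k = a + (b cos phi) cos(delta_k) - (b sin phi) sin(delta_k)
    K, H, W = frames.shape
    A = np.column_stack([np.ones(K), np.cos(shifts), -np.sin(shifts)])   # (K, 3)
    coeffs, *_ = np.linalg.lstsq(A, frames.reshape(K, -1), rcond=None)   # (3, H*W)
    a, c, s = coeffs
    return np.arctan2(s, c).reshape(H, W)   # wrapped phase estimate

# e.g. phi = retrieve_phase(np.stack([I0, I1, I2]), np.array([0.0, 1.9, 4.1]))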
MFR-Net: A multi-feature fusion phase unwrapping method for different speckle noises
Pub Date: 2024-09-12  DOI: 10.1016/j.optlaseng.2024.108585
Phase unwrapping is a crucial step in laser interferometry for obtaining accurate physical measurements of objects. To reduce the impact of speckle noise on the wrapped phase during actual measurement and improve the subsequent measurement accuracy, a multi-feature fusion phase unwrapping method for different speckle noises, named MFR-Net, is proposed in this paper. The network consists of a front-end multi-module filter processing layer and a back-end network with dilated convolution and a coordinate attention mechanism. By reducing the random phase differences introduced by different levels of noise, the network better extracts spatial features such as inter-pixel gradient information under speckle noise, so that it successfully unwraps wrapped phases with different speckle noises and accurately recovers the real phase information. Taking wrapped phases with multiplicative speckle noise and additive random noise as the dataset, ablation and comparison experiments show that MFR-Net gives superior unwrapping results. Under three different levels of speckle noise, the average MSE, SSIM, PSNR and AU of MFR-Net improve by at least 84.80 %, 10.99 %, 29.00 % and 7.72 %, respectively, compared with the PDVQG, TIE, DLPU and VURNet algorithms. When the standard deviation of the speckle noise varies continuously in the range [1.0, 2.0], the average values of the four indexes reach 0.12 rad, 0.91, 31.80 dB and 99.96 %, respectively, indicating the stronger robustness of MFR-Net. In addition, phase-step unwrapping is performed with MFR-Net; compared with DLPU and VURNet, MFR-Net reduces the MSE by 80 % and 87.35 %, respectively, demonstrating its outstanding generalization capability. The proposed MFR-Net realizes correct phase unwrapping under different speckle noises and may be applied in laser interferometry applications such as digital holography and interferometric synthetic aperture radar.
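For reference, the relation any unwrapping method — learned or classical — must invert is the standard one (not specific to MFR-Net):

\[ \psi = \mathcal{W}(\varphi) = \varphi - 2\pi \left\lfloor \frac{\varphi + \pi}{2\pi} \right\rfloor, \qquad \varphi = \psi + 2\pi k, \quad k \in \mathbb{Z}, \]

i.e. the network has to recover the integer fringe order k at every pixel in spite of the speckle noise.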
Analytical equation for camera imaging with refractive interfaces
Pub Date: 2024-09-11  DOI: 10.1016/j.optlaseng.2024.108581
Camera imaging through refractive interfaces is a crucial issue in photogrammetric measurements. Most past studies adopted numerical optimization algorithms based on refractive ray tracing, in which the camera and interface parameters are calculated iteratively. Inappropriate initial values can cause the iterations to diverge, and the iterations do not efficiently reveal the true nature of refractive imaging. Obtaining camera calibration results that are both flexible and physically interpretable therefore remains challenging. Consequently, in this study, we model refractive imaging using ray transfer matrix analysis and deduce an analytical refractive imaging (ARI) equation that explicitly describes the refractive geometry in matrix form. Although the equation is built upon the paraxial approximation, a numerical experiment shows that it accurately describes refractive imaging for a considerable object distance and a slightly tilted flat interface. The ARI equation can be used to define the expansion center and the normal vector of the flat interface. Finally, we also propose a flexible measurement method to determine the orientation of the flat interface, wherein the orientation is measured rather than calculated by iterative procedures.
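As background on the ray-transfer-matrix building blocks such an equation is assembled from: the standard paraxial matrices for free-space propagation and refraction at a flat interface. The distances, indices and composition below are illustrative only, not the paper's ARI derivation.

import numpy as np

def propagate(d):
    # Paraxial free-space propagation over distance d.
    return np.array([[1.0, d], [0.0, 1.0]])

def flat_refraction(n1, n2):
    # Paraxial refraction at a flat interface from index n1 into n2
    # (Snell's law linearized: n1*theta1 = n2*theta2).
    return np.array([[1.0, 0.0], [0.0, n1 / n2]])

# A ray [height; angle] travels d1 = 300 mm in water (n = 1.33), crosses a flat
# port into air, then travels d2 = 50 mm to the lens; the system matrix is the
# right-to-left product of the elementary matrices.
d1, d2 = 300.0, 50.0
M = propagate(d2) @ flat_refraction(1.33, 1.0) @ propagate(d1)
ray_out = M @ np.array([0.0, 0.01])   # an on-axis ray with a 10 mrad slope
print(M)
print(ray_out)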
Multicolor imaging based on brightness coded set
Pub Date: 2024-09-11  DOI: 10.1016/j.optlaseng.2024.108552
Fluorescence imaging necessitates precise matching of the excitation source, dichroic mirror, emission filter, detector and dyes, which is complex and time-consuming, especially for probe-multiplexing applications. We propose a novel method for multicolor imaging based on a brightness coded set. Each brightness code consists of 12 bits (OOOXXXYYYZTT), denoting probe type, cube, emission filter, imaging result and priority, respectively. The brightness of a probe in an imaging system is defined as the product of the extinction coefficient, the quantum yield and the filter transmittance. When the brightness exceeds the threshold, Z = 1 indicates a clear image; otherwise Z = 0. The higher the brightness value, the higher the priority (TT). To validate the efficacy and efficiency of the coding method, we conducted two separate experiments involving four-color imaging. The proposed method substantially simplifies the conventional approach to device matching in multicolor imaging by leveraging spectrograms, and presents a promising avenue for the advancement of intelligent multicolor imaging systems.
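A minimal sketch of how one such code could be assembled from the quantities the abstract names. The specific bit values, threshold and numbers below are invented for illustration; the paper's actual bit assignments and priority ranking are not given here.

def brightness_code(probe_bits, cube_bits, filter_bits,
                    extinction, quantum_yield, transmittance,
                    threshold, priority_bits):
    # Brightness = extinction coefficient * quantum yield * filter transmittance;
    # Z = 1 when it exceeds the threshold (clear image), otherwise 0.
    z = "1" if extinction * quantum_yield * transmittance > threshold else "0"
    # 12-bit layout OOO XXX YYY Z TT: probe, cube, filter, result, priority.
    return probe_bits + cube_bits + filter_bits + z + priority_bits

# e.g. a hypothetical probe/cube/filter combination that clears the threshold:
print(brightness_code("001", "010", "011", 8.0e4, 0.92, 0.85, 5.0e4, "10"))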
Attenuated color channel adaptive correction and bilateral weight fusion for underwater image enhancement
Pub Date: 2024-09-11  DOI: 10.1016/j.optlaseng.2024.108575
Due to the absorption and scattering of light and the influence of suspended particles, underwater images commonly exhibit color distortion, reduced contrast, and diminished detail. This paper proposes an attenuated-color-channel adaptive correction and bilateral weight fusion approach, called WLAB, to address these degradation issues. Specifically, a novel white balance method is first applied to balance the color channels of the input image. A local-block-based fast non-local means method is then proposed to obtain a denoised version of the color-corrected image, and an adaptive stretching method that considers the histogram's local features is used to obtain a contrast-enhanced version of the color-corrected image. Finally, a bilateral weight fusion method is proposed to fuse these two image versions into an output image with complementary advantages. Experiments on three benchmark underwater image datasets, with comparisons against ten state-of-the-art methods, show that WLAB has a significant advantage over the comparative methods. Notably, WLAB exhibits a degree of independence from camera settings and improves the precision of downstream image processing tasks, including keypoint and saliency detection. It also demonstrates commendable adaptability in improving low-light and foggy images.
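As a sketch of the final fusion step only — blending the two intermediate versions with per-pixel weights normalised to sum to one. How the paper's bilateral weight maps are actually computed is not reproduced; array shapes and names are assumptions.

import numpy as np

def weighted_fusion(img_a, img_b, w_a, w_b, eps=1e-6):
    # img_a, img_b: (H, W, 3) versions of the same scene (e.g. denoised
    # color-corrected vs. contrast-stretched); w_a, w_b: (H, W) weight maps.
    w_sum = w_a + w_b + eps
    wa = (w_a / w_sum)[..., None]   # broadcast over color channels
    wb = (w_b / w_sum)[..., None]
    return wa * img_a + wb * img_b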