High-Q refractive index sensor with an ultrawide detection range based on topological bound states in the continuum
Pub Date: 2024-10-05 | DOI: 10.1016/j.optlaseng.2024.108621
Previous studies on refractive index sensors have shown that their sensing characteristics are limited by variations in the refractive index of the background environment, resulting in a significant decrease in the figure of merit (FOM) and sensitivity of the sensor. Here, we design a high-Q refractive index sensor composed of a Dirac semimetal. The proposed sensor is based on topological bound states in the continuum (BICs), which have a diverging quality factor, and exhibits an extremely high FOM and detection sensitivity over a wide variation range of the background refractive index. Its operation relies on the reciprocating motion of two pairs of BICs along the k_x and k_y high-symmetry lines of momentum space. Specifically, the two pairs of BICs, which are characterized by topological charges, can be merged and generated by varying the Fermi energy of the Dirac semimetal. Furthermore, we extract the relation between the Fermi energy and the background refractive index at the merging-BIC condition. This ensures that the FOM remains extremely high over a very wide variation range of the background refractive index. Our findings provide a perspective for investigating ultrahigh-performance refractive index sensors based on merging BICs.
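For reference, the sensitivity, figure of merit, and quality factor discussed in this abstract follow the standard definitions used for resonant refractive index sensors; these formulas are quoted here as background and are not restated in the paper's abstract:

S = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n}, \qquad \mathrm{FOM} = \frac{S}{\mathrm{FWHM}}, \qquad Q = \frac{\lambda_{\mathrm{res}}}{\mathrm{FWHM}}

where Δλ_res is the resonance shift caused by a change Δn of the analyte refractive index and FWHM is the resonance linewidth; a diverging Q, as at a BIC, therefore drives the FOM up for a given sensitivity.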
{"title":"High-Q refractive index sensor with an ultrawide detection range based on topological bound states in the continuum","authors":"","doi":"10.1016/j.optlaseng.2024.108621","DOIUrl":"10.1016/j.optlaseng.2024.108621","url":null,"abstract":"<div><div>Previous studies on refractive index sensors have shown that their sensing characteristics are limited by variations in the background environment refractive index, resulting in a significant decrease in the figure of merit (FOM) and sensitivity of the sensor. Here, we design a high-Q refractive index sensor, which is composed of a Dirac semimetal. The proposed sensor is based on topological bound states in the continuum (BICs), which have a diverging quality factor, and exhibits extremely high FOM and detection sensitivity over a wide variation range of the background environment refractive index. Its operation is based on the reciprocating motion of two pairs of BICs in the <em>k</em><sub><em>x</em></sub> and <em>k</em><sub><em>y</em></sub> high-symmetry lines of the momentum space. Specifically, two pairs of BICs, which are characterized by topological charges, can be merged and generated by varying the Fermi energy of the Dirac semimetal. Furthermore, we extract the relation between the Fermi energy and the background environment refractive index for the merging-BIC. This ensures that the FOM is extremely high over a very wide variation range of the background environment refractive index. Our findings provide a perspective for investigating ultrahigh performance refractive index sensors based on merging-BICs.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single-shot Fresnel incoherent correlation holography based on digital self-calibrated point source holograms
Pub Date: 2024-10-05 | DOI: 10.1016/j.optlaseng.2024.108616
Achieving high-quality 3D imaging with a single exposure has always been the goal of Fresnel incoherent correlation digital holography (FINCH). However, there is a trade-off between the space-time bandwidth product and system complexity, which lowers the reconstruction quality of FINCH. Here, we propose a single-shot FINCH method based on digital self-calibrated point source holograms (PSHs) to achieve dynamic 3D imaging. We first show that a single FINCH hologram integrates information from multiple incoherently superimposed PSHs, so that the reconstructed images exhibit significant sparsity variations in the gradient domain when correlated with the PSHs to be calibrated. As a result, accurate PSHs of objects at different depth planes can be conveniently obtained with a digital self-calibration algorithm. Furthermore, by combining the digitally self-calibrated PSHs with a compressive sensing (CS) reconstruction algorithm, the quality of the 3D reconstruction is effectively enhanced, with clear improvements in lateral and axial resolution. Importantly, this method offers a new strategy for simplifying the implementation and improving the space-time bandwidth product of FINCH, and thus achieves high-quality 3D imaging of dynamic scenes.
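As background for the correlation step this method builds on, the following is a minimal sketch of reconstructing one depth plane of a FINCH hologram with its PSH via regularized inverse filtering; the array names, regularization constant, and synthetic inputs are assumptions for illustration, and the paper's self-calibration and compressive-sensing stages are not reproduced.

import numpy as np

def finch_reconstruct(holo, psh, eps=1e-3):
    """Reconstruct one depth plane by (regularized) correlation with its PSH."""
    H = np.fft.fft2(holo)
    P = np.fft.fft2(psh)
    # Matched/inverse filter: correlate with the PSH, damped by eps to avoid
    # division by near-zero spatial frequencies.
    rec = np.fft.ifft2(H * np.conj(P) / (np.abs(P) ** 2 + eps))
    return np.abs(rec)

# Usage with synthetic data (shapes are purely illustrative).
holo = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
psh = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
img = finch_reconstruct(holo, psh)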
{"title":"Single-shot Fresnel incoherent correlation holography based on digital self-calibrated point source holograms","authors":"","doi":"10.1016/j.optlaseng.2024.108616","DOIUrl":"10.1016/j.optlaseng.2024.108616","url":null,"abstract":"<div><div>Achieving high quality 3D imaging with single exposure has always been the goal of Fresnel incoherent correlation digital holography (FINCH). However, there is a trade-off between space-time bandwidth product and system complexity, resulting in lower reconstruction quality of FINCH. Here, we propose a single-shot FINCH method based on digital self-calibrated point source holograms (PSHs) to achieve dynamic 3D imaging. Firstly, it demonstrates that a single FINCH hologram integrates information from multiple incoherently superimposed PSHs, so that the reconstructed images exhibit significant sparsity variations in the gradient domain when correlated with the PSHs to be calibrated. As a result, we can conveniently achieve accurate PSHs of objects at different depth planes by digital self-calibration algorithm. Furthermore, by combining the digital self-calibrated PSHs with a compressive sensing (CS) reconstruction algorithm, the quality of the 3D reconstruction can be effectively enhanced, showing excellent performance in improving lateral and axial resolution. Importantly, this method offers a new strategy for simplifying implementation system and improving space-time bandwidth product of FINCH technology, and then achieves high quality 3D imaging of dynamic scene.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast error detection method for additive manufacturing process monitoring using structured light three dimensional imaging technique
Pub Date: 2024-10-04 | DOI: 10.1016/j.optlaseng.2024.108609
This paper presents a novel method to speed up error detection in an additive manufacturing (AM) process by minimizing the necessary three-dimensional (3D) reconstruction and comparison. We develop a structured light 3D imaging technique that has native pixel-by-pixel mapping between the captured two-dimensional (2D) image and the reconstructed 3D point cloud. This 3D imaging technique allows error detection to be performed in the 2D image domain prior to 3D point cloud generation, which drastically reduces complexity and computational time. Compared to an existing AM error detection method based on 3D reconstruction and point cloud processing, experimental results from a material extrusion (MEX) AM process demonstrate that our proposed method significantly increases the error detection speed.
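To illustrate the idea of screening in the 2D image domain before any 3D reconstruction, here is a minimal sketch under assumed inputs: a captured structured-light image (or unwrapped phase map), a reference rendered from the nominal layer geometry, and a hypothetical phase_to_xyz callable standing in for the system's native pixel-to-point mapping. It is not the paper's implementation.

import numpy as np

def detect_error_regions(captured, reference, threshold=0.05):
    """Compare the captured image (or phase map) with a reference rendered from
    the nominal layer geometry, pixel by pixel, and flag deviations."""
    diff = np.abs(captured - reference)
    return diff > threshold          # boolean defect mask in image coordinates

def reconstruct_flagged_points(mask, phase, phase_to_xyz):
    """Lift only the flagged pixels to 3D, exploiting the native
    pixel-by-pixel mapping of the structured-light system.
    phase_to_xyz is a hypothetical calibration function (row, col, phase) -> xyz."""
    rows, cols = np.nonzero(mask)
    return phase_to_xyz(rows, cols, phase[rows, cols])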
{"title":"Fast error detection method for additive manufacturing process monitoring using structured light three dimensional imaging technique","authors":"","doi":"10.1016/j.optlaseng.2024.108609","DOIUrl":"10.1016/j.optlaseng.2024.108609","url":null,"abstract":"<div><div>This paper presents a novel method to speed up error detection in an additive manufacturing (AM) process by minimizing the necessary three-dimensional (3D) reconstruction and comparison. We develop a structured light 3D imaging technique that has native pixel-by-pixel mapping between the captured two-dimensional (2D) image and the reconstructed 3D point cloud. This 3D imaging technique allows error detection to be performed in the 2D image domain prior to 3D point cloud generation, which drastically reduces complexity and computational time. Compared to an existing AM error detection method based on 3D reconstruction and point cloud processing, experimental results from a material extrusion (MEX) AM process demonstrate that our proposed method significantly increases the error detection speed.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and analysis of single-electrode integrated lithium niobate optical phased array for two-dimensional beam steering
Pub Date: 2024-10-04 | DOI: 10.1016/j.optlaseng.2024.108617
The realization of a high-speed, low-power optical phased array (OPA) on thin-film lithium niobate on insulator (LNOI) is considered an ideal solution for the next generation of solid-state beam steering. Most reported on-chip two-dimensional optical phased arrays suffer from issues such as large antenna spacing, high power consumption, and complex wiring due to the independent control of array elements. To address these challenges while fully utilizing the benefits of the LNOI platform, we propose a two-dimensional beam-scanning OPA based on lithium niobate (LN) waveguides. We design a multi-layer cascaded domain-engineering structure inside the LN waveguide, combined with wavelength tuning, to enable two-dimensional beam scanning with a single electrode controlling the OPA. Through simulation, we achieve a 42° × 9.2° two-dimensional steering range. Compared to existing on-chip integrated OPAs, this work offers significant advantages in increased integration, simplified control units, and reduced power consumption.
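As a generic illustration of how a linear phase gradient steers a phased-array beam (the emitter count, half-wavelength pitch, and phase step below are assumptions for illustration, not parameters of the LN device), a short sketch:

import numpy as np

lam = 1.55e-6          # wavelength (m)
d = 0.775e-6           # emitter pitch (m); half-wavelength pitch avoids grating lobes
N = 64                 # number of emitters
dphi = 0.8             # phase step between adjacent emitters (rad)

theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
k = 2 * np.pi / lam
n = np.arange(N)
# Array factor: coherent sum of emitters with geometric plus applied phase.
af = np.abs(np.exp(1j * n[:, None] * (k * d * np.sin(theta) + dphi)).sum(axis=0)) / N

theta_peak = np.degrees(theta[np.argmax(af)])
theta_theory = np.degrees(np.arcsin(-dphi * lam / (2 * np.pi * d)))
print(theta_peak, theta_theory)   # the numerical peak matches the analytic steering angle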
{"title":"Design and analysis of single-electrode integrated lithium niobate optical phased array for two-dimensional beam steering","authors":"","doi":"10.1016/j.optlaseng.2024.108617","DOIUrl":"10.1016/j.optlaseng.2024.108617","url":null,"abstract":"<div><div>The realization of high-speed, low-power optical phased array (OPA) on thin-film lithium niobate on insulator (LNOI) is considered an ideal solution for the next generation of solid-state beam steering. Most reported on-chip two-dimensional optical phased arrays suffer from issues such as large antenna spacing, high power consumption and complex wiring due to independent control of array elements. To address these challenges while fully utilizing the benefits of the LNOI platform, we propose a two-dimensional beam-scanning OPA based on lithium niobate (LN) waveguides. We design a multi-layer cascaded domain engineering structure inside the LN waveguide, combined with wavelength tuning, to enable two-dimensional beam scanning with single electrode controlling the OPA. Through simulation, we achieve a 42°×9.2° two-dimensional beam steering. Compared to existing on-chip integrated OPAs, this work offers significant advantages in increasing integration, simplifying control units and reducing power consumption.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-color live-cell super-resolution fluorescence lifetime imaging via polarization modulation-based fluorescence emission difference
Pub Date: 2024-10-03 | DOI: 10.1016/j.optlaseng.2024.108547
Fluorescence lifetime imaging microscopy (FLIM) has been proposed as an important technique for understanding the chemical microenvironment in cells and tissues, as it provides additional information compared to conventional fluorescence imaging. However, it is often hindered by limited spatial resolution and signal-to-noise ratio (SNR). In this study, we introduce a dual-color super-resolution FLIM method, termed Parallel Detection and Fluorescence Emission Difference (PDFED) FLIM. The integration of parallel detection with photon reassignment effectively enhances photon efficiency, SNR, and resolution. Additionally, differential imaging employing polarization modulation effectively reduces artifacts resulting from sample changes during live-cell imaging. PDFED-FLIM improves spatial resolution by approximately 1.6 times and peak signal-to-noise ratio (PSNR) by around 1.3 times. Furthermore, live-cell imaging shows improved resolution and image quality, demonstrating the broad potential of PDFED-FLIM in biomedical applications.
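The fluorescence emission difference operation underlying this method can be summarized by a simple subtraction; the sketch below uses the commonly quoted form I_FED = I_solid − γ·I_doughnut with an assumed subtraction factor γ, and is not the paper's dual-color, lifetime-resolved pipeline.

import numpy as np

def fed(img_solid, img_doughnut, gamma=0.7):
    """Fluorescence emission difference: subtract a scaled doughnut-spot image
    from the solid-spot image and clip negative values to zero."""
    out = img_solid - gamma * img_doughnut
    return np.clip(out, 0.0, None)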
{"title":"Dual-color live-cell super-resolution fluorescence lifetime imaging via polarization modulation-based fluorescence emission difference","authors":"","doi":"10.1016/j.optlaseng.2024.108547","DOIUrl":"10.1016/j.optlaseng.2024.108547","url":null,"abstract":"<div><div>Fluorescence lifetime imaging microscopy (FLIM) has been proposed as an important technique for understanding the chemical microenvironment in cells and tissues, as it provides additional information compared to conventional fluorescence imaging. However, it is often hindered by limited spatial resolution and signal-to-noise ratio (SNR). In this study, we introduce a dual-color super-resolution FLIM method, termed Parallel Detection and Fluorescence Emission Difference (PDFED) FLIM. The integration of parallel detection with photon reassignment enhances photon efficiency, SNR, and resolution effectively. Additionally, differential imaging employing polarization modulation effectively reduces artifacts resulting from sample changes during live-cell imaging. PDFED-FLIM demonstrates enhancements in spatial resolution by approximately 1.6 times and peak signal-to-noise ratio (PSNR) by around 1.3 times. Furthermore, live-cell imaging showcases improved resolution and image quality, signifying the extensive potential of PDFED-FLIM in biomedical applications.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The spectral mismatch correction factor estimation using broadband photometer measurements and catalog parameters for tested white LED sources
Pub Date: 2024-10-02 | DOI: 10.1016/j.optlaseng.2024.108614
Using a modeled look-up chart developed in this work, we show that photometric quantities such as the illuminance or luminous intensity of test LED sources can be measured accurately in one step. This is a simple broadband measurement that may substitute for the complicated and costly spectral measurements currently in place. To develop the look-up chart, the typical minimum and maximum of the Spectral Mismatch Correction Factor (SMCF) for a given photometer were estimated in relation to catalog parameters of the LEDs, such as correlated color temperature (CCT) and melanopic daylight efficacy ratio (mDER). This research was based on a unique, large dataset of real photometer relative spectral responsivities s_rel(λ), collected and measured at accredited laboratories in America, Asia, and Europe, together with modeled LED spectral power distribution (SPD) data. Independent look-up tables were developed for color-mixed LEDs (cm-LEDs) and white phosphor-converted LEDs (pc-LEDs), two common LED types.
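For reference, the spectral mismatch correction factor has the standard photometric definition below (notation as commonly used, with the calibration source typically CIE Standard Illuminant A); it is quoted here as background and not extracted from the paper:

F^{*} \;=\;
\frac{\int S_{\mathrm{t}}(\lambda)\, V(\lambda)\, \mathrm{d}\lambda \;\cdot\; \int S_{\mathrm{A}}(\lambda)\, s_{\mathrm{rel}}(\lambda)\, \mathrm{d}\lambda}
     {\int S_{\mathrm{t}}(\lambda)\, s_{\mathrm{rel}}(\lambda)\, \mathrm{d}\lambda \;\cdot\; \int S_{\mathrm{A}}(\lambda)\, V(\lambda)\, \mathrm{d}\lambda}

where S_t(λ) is the SPD of the test LED, S_A(λ) that of the calibration source, V(λ) the photopic luminous efficiency function, and s_rel(λ) the photometer's relative spectral responsivity.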
{"title":"The spectral mismatch correction factor estimation using broadband photometer measurements and catalog parameters for tested white LED sources","authors":"","doi":"10.1016/j.optlaseng.2024.108614","DOIUrl":"10.1016/j.optlaseng.2024.108614","url":null,"abstract":"<div><div>Using a modeled look-up chart developed in this work, we show that accurate photometric measurements of characteristics like illuminance or luminous intensity of test LED sources can be measured in one step. This is a simple broadband measurement that may substitute for complicated and costly spectral measurements currently in place. To develop the look-up chart, the typical minimum and maximum of the Spectral Mismatch Correction Factor (SMCF) for a given photometers was estimated in relation to catalog parameters of the LEDs such as correlated color temperature (CCT) and melanopic daylight efficacy ratio (<span><math><mtext>mDER</mtext></math></span>). This research was based on the unique and large dataset of real photometers spectral response <span><math><mrow><msub><mi>s</mi><mtext>rel</mtext></msub><mrow><mo>(</mo><mi>λ</mi><mo>)</mo></mrow></mrow></math></span> collected and measured at accredited laboratories located at America, Asia and Europe and modeled LED's spectral power distribution (SPD) data. Independent look-up tables were developed for color-mixed LEDs (cm-LEDs) and white phosphor-converted LEDs (pc-LEDs), two common LED types.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zonal shape reconstruction for Shack-Hartmann sensors and deflectometry
Pub Date: 2024-10-02 | DOI: 10.1016/j.optlaseng.2024.108615
Some metrological techniques, such as Shack-Hartmann sensing, deflectometry, or fringe projection profilometry, measure the shape of an optical surface indirectly from slope measurements. Zonal shape reconstruction, a method to reconstruct shape with a high number of degrees of freedom, is used for all of these applications. Interest in it has risen with the use of deflectometers to acquire high-resolution slope data for optical manufacturing, especially because shape reconstruction limits the achievable shape estimation error.
Zonal reconstruction methods all rely on the choice of a data formation model, a basis on which the shape is decomposed, and an estimator. In this paper, we first study the canonical Fried and Southwell models from the literature and analyze their limitations. We show that modeling the slope measurement as a point-wise derivative, as they both do, can induce a bias in the shape estimate, and that this assumption also dictates the bases on which the shape can be decomposed.
In the second part of this paper, we propose to build an unbiased model of the data formation, without constraints on the choice of the decomposition basis. We then compare these models to the canonical models of Fried and Southwell.
Lastly, we perform a regularized maximum a posteriori (MAP) reconstruction and compare the total shape error of this method with the state of the art for the Southwell and Fried models, first in simulation and then on experimental data. We demonstrate that the suggested method outperforms the canonical models in terms of total shape reconstruction error on a deflectometry measurement of the high-frequency content of a freeform mirror.
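For context, here is a minimal sketch of the canonical Southwell zonal reconstruction that the paper takes as a baseline: height differences between adjacent grid points are equated to averaged slope samples, and the resulting sparse system is solved in the least-squares sense. The grid spacing, array names, and plain least-squares estimator are assumptions; the paper's unbiased data formation model and regularized MAP estimator are not reproduced here.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def southwell_integrate(sx, sy, h=1.0):
    """Reconstruct heights z (up to a piston term) from x/y slope maps on a
    regular grid with spacing h, using the classical Southwell equations."""
    ny, nx = sx.shape
    npts = ny * nx
    A = lil_matrix((2 * npts, npts))
    b = np.zeros(2 * npts)
    eq = 0
    for i in range(ny):
        for j in range(nx - 1):          # x-direction neighbours
            p, q = i * nx + j, i * nx + j + 1
            A[eq, q], A[eq, p] = 1.0, -1.0
            b[eq] = h * (sx[i, j] + sx[i, j + 1]) / 2.0
            eq += 1
    for i in range(ny - 1):
        for j in range(nx):              # y-direction neighbours
            p, q = i * nx + j, (i + 1) * nx + j
            A[eq, q], A[eq, p] = 1.0, -1.0
            b[eq] = h * (sy[i, j] + sy[i + 1, j]) / 2.0
            eq += 1
    # Least-squares solution; the unconstrained piston is fixed by lsqr's
    # minimum-norm property.
    z = lsqr(A.tocsr()[:eq], b[:eq])[0]
    return z.reshape(ny, nx)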
{"title":"Zonal shape reconstruction for Shack-Hartmann sensors and deflectometry","authors":"","doi":"10.1016/j.optlaseng.2024.108615","DOIUrl":"10.1016/j.optlaseng.2024.108615","url":null,"abstract":"<div><div>Some metrological means, such as Shack-Hartmann, deflectometry sensors or fringe projection profilometry, measure the shape of an optical surface indirectly from slope measurements. Zonal shape reconstruction, a method to reconstruct shape with a high number of degrees of freedom, is used for all of these applications. It has risen in interest with the use of deflectometers for the acquisition of high resolution slope data for optical manufacturing, especially because shape reconstruction is limiting in terms of shape estimation error.</div><div>Zonal reconstruction methods all rely on the choice of a data formation model, a basis on which the shape will be decomposed, and an estimator. In this paper, we first study the canonical Fried and Southwell models of the literature and analyze their limitations. We show that modeling the slope measurement by a point-wise derivative as they both do can induce a bias on the shape estimation, and that the bases on which the shape is decomposed are imposed because of this assumption.</div><div>In the second part of this paper, we propose to build an unbiased model of the data formation, without constraints on the choice of the decomposition basis. We then compare these models to the canonical models of Fried and Southwell.</div><div>Lastly, we perform a regularized MAP reconstruction, and compare the performance in terms of total shape error of this method to the state of the art for the Southwell and Fried models, first by simulation, then on experimental data. We demonstrate that the suggested method outperforms the canonical models in terms of total shape reconstruction error on a deflectometry measurement of the high-frequency content of a freeform mirror.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Positioning of the test surface in a CGH null test by cat's eye interference
Pub Date: 2024-10-02 | DOI: 10.1016/j.optlaseng.2024.108627
The interferometric null test of optical aspheres and freeforms is one of the most established measurement methods, despite recognized limitations related to the positioning of the test surface. Slight misalignment of the surface in the null test system can introduce considerable wave aberrations. Current approaches for positioning the test surface are based on a series of fiducial marks or retroreflectors set at given spots around the test surface, which consequently suffer from the problem of datum transformation. We propose a method for positioning the test surface in a null test by cat's eye interference, without any fiducial marks or retroreflectors. A computer-generated hologram (CGH) is used as the null optic, fabricated with a null test pattern, an alignment pattern, and a positioning pattern. A part of the test beam with a relatively small f-number is diffracted by the positioning pattern and then focuses on certain spots of the test surface. The cat's eye reflection from the test surface returns to the interferometer and interferes with the reference beam. It is then possible to precisely position the test surface by measuring the wavefront error of the cat's eye interference. The design method for such a CGH positioning pattern is presented, following the physical constraint that the surface normal at the cat's eye reflection is exactly the angular bisector of the positioning beam. An analysis of the positioning performance is then presented to show the sensitivity to misalignments including defocus, lateral shift, and tip-tilt, which is finally verified experimentally by measuring an even asphere with a CGH.
{"title":"Positioning of the test surface in a CGH null test by cat's eye interference","authors":"","doi":"10.1016/j.optlaseng.2024.108627","DOIUrl":"10.1016/j.optlaseng.2024.108627","url":null,"abstract":"<div><div>Interferometric null test of optical aspheres and freeforms is one of the most established methods, in spite of recognized limitations related to positioning of the test surface. Slight misalignment of the surface in the null test system can introduce remarkable wave aberrations. Current approaches for positioning the test surface are based on a series of fiducial marks or retroreflectors set at given spots around the test surface, which consequently suffer the problem of datum transformation. We propose a method for positioning the test surface in a null test by cat's eye interference without any fiducial marks or retroreflectors. A computer-generated hologram (CGH) is used as null optics fabricated with null test pattern, alignment pattern and positioning pattern. A part of test beam with relatively small f/number is diffracted through the positioning pattern and then focuses on certain spots of the test surface. The cat's eye reflection from the test surface returns to the interferometer and interferes with the reference beam. It is then possible to precisely position the test surface by measuring the wavefront error of the cat's eye interference. The design method for such a CGH positioning pattern is presented, following the physical constraint that the surface normal at the cay's eye reflection is right the angular bisector of the positioning beam. Analysis on the positioning performance is then presented to show the sensitivity to misalignment including defocus, lateral shift and tip-tilt, which at last is experimentally verified by measuring an even asphere with a CGH.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Full-field three-dimensional system calibration for composite surfaces reconstruction
Pub Date: 2024-10-01 | DOI: 10.1016/j.optlaseng.2024.108619
The accurate calibration of composite surface measurement systems for specular and diffuse surfaces is of great importance in a number of industrial applications. This paper addresses the challenges in calibrating composite surface reconstruction by proposing a comprehensive full-field calibration method. The principal innovation is the construction of a comprehensive reference phase and the pixel-by-pixel calibration of system parameters within the same coordinate system. The details are as follows: (1) construct a complete reference phase for the composite surfaces through external-parameter-constrained phase fusion and interpolation; (2) calibrate the mapping coefficients pixel by pixel and compensate for the accuracy degradation caused by partial reflection. Experimental results validate the accuracy of the full-field calibration data, demonstrating the effectiveness and feasibility of the method.
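As an illustration of pixel-by-pixel calibration of mapping coefficients, the sketch below fits an assumed per-pixel polynomial phase-to-height model from reference planes at known heights; the paper's actual mapping model, phase-fusion step, and specular-branch compensation are not reproduced.

import numpy as np

def calibrate_pixelwise(phases, heights, order=2):
    """phases: stack (K, H, W) of unwrapped phase maps measured on K reference
    planes at known heights (K,). Returns per-pixel polynomial coefficients,
    highest degree first (as in np.polyfit)."""
    K, H, W = phases.shape
    coeffs = np.empty((order + 1, H, W))
    for r in range(H):
        for c in range(W):
            coeffs[:, r, c] = np.polyfit(phases[:, r, c], heights, order)
    return coeffs

def apply_mapping(phase, coeffs):
    """Evaluate the per-pixel polynomial on a new phase map."""
    z = np.zeros_like(phase)
    order = coeffs.shape[0] - 1
    for k, ck in enumerate(coeffs):       # ck corresponds to power (order - k)
        z += ck * phase ** (order - k)
    return z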
{"title":"Full-field three-dimensional system calibration for composite surfaces reconstruction","authors":"","doi":"10.1016/j.optlaseng.2024.108619","DOIUrl":"10.1016/j.optlaseng.2024.108619","url":null,"abstract":"<div><div>The accurate calibration of composite surface measurement systems for specular and diffused surface is of great importance in a number of industrial applications. This paper addresses the challenges in calibrating composite surface reconstruction by proposing a comprehensive full-field calibration method. The principal innovation is the construction of a comprehensive reference phase and the pixel-by-pixel calibration of system parameters within the same coordinate system. The details are as follows: (1) construct a complete reference phase for the composite surfaces through external parameter constraints phase fusion and interpolation; (2) calibrate the mapping relationship coefficients, compensate the accuracy degradation of the partial reflection. Experimental results validate the accuracy of the full-field calibration data, demonstrating the effectiveness and feasibility.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-supervised learning-based re-parameterization instance segmentation for hazardous object in terahertz image
Pub Date: 2024-10-01 | DOI: 10.1016/j.optlaseng.2024.108620
Recently, terahertz imaging technology has shown widespread potential applications in the public security screening field. Previous terahertz hazardous object detection techniques primarily relied on manual recognition and image processing methods. Some solutions incorporating deep learning depend on large quantities of high-quality data, making it challenging to achieve low-cost and high-performance detection. To reduce this dependency on large amounts of high-quality data, we propose a diversified dynamic structure You Only Look Once (DDS-YOLO) model based on masked image modeling and structural re-parameterization for the instance segmentation of hazardous objects in terahertz security inspection images. To address the scarcity of terahertz security inspection image samples and the difficulty of annotating them, we combine automatic data strategies with masked image modeling for self-supervised learning. We propose a multilevel feature refinement fusion mechanism to enhance the quality of the learned feature representations. Backbone parameter transfer and fine-tuning training strategies are employed to achieve hazardous object instance segmentation on the terahertz dataset. To address the low detection accuracy caused by poor terahertz image quality, we develop a re-parameterizable hierarchical structure for the backbone, improve a multi-scale feature-integrating neck, and design a dynamically decoupled head with lower computational requirements to enhance the performance of the instance segmentation model. Experimental results demonstrate that the proposed model accurately outputs detection boxes, categories, and segmentation masks for hazardous objects with minimal training samples. Comparative experiments indicate that the proposed model outperforms existing state-of-the-art methods in detection performance. The proposed DDS-YOLO model achieves 59.3 % mask mean Average Precision (mAP) and 61.3 % box mAP, and its parameter count and computational requirements also meet the demands of practical application scenarios.
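To illustrate the structural re-parameterization idea that such a backbone relies on, the sketch below fuses parallel 3×3 convolution, 1×1 convolution, and identity branches into a single equivalent 3×3 convolution (RepVGG-style branch fusion); the channel counts and random weights are placeholders, and the paper's exact hierarchical block design is not reproduced.

import torch
import torch.nn.functional as F

def fuse_branches(w3, b3, w1, b1, identity=True):
    """Fold a 3x3 conv, a 1x1 conv, and an optional identity branch
    into a single equivalent 3x3 kernel and bias."""
    out_ch, in_ch = w3.shape[:2]
    w1_as_3x3 = F.pad(w1, [1, 1, 1, 1])   # embed the 1x1 kernel at the 3x3 centre
    w_fused = w3 + w1_as_3x3
    b_fused = b3 + b1
    if identity:                           # identity needs matching channel counts
        eye = torch.zeros(out_ch, in_ch, 3, 3)
        for c in range(out_ch):
            eye[c, c, 1, 1] = 1.0
        w_fused = w_fused + eye
    return w_fused, b_fused

# Training-time branches (random weights for illustration).
w3, b3 = torch.randn(16, 16, 3, 3), torch.randn(16)
w1, b1 = torch.randn(16, 16, 1, 1), torch.randn(16)
x = torch.randn(1, 16, 32, 32)

y_multi = (F.conv2d(x, w3, b3, padding=1)
           + F.conv2d(x, w1, b1)
           + x)                                    # three parallel branches
wf, bf = fuse_branches(w3, b3, w1, b1)
y_single = F.conv2d(x, wf, bf, padding=1)          # one conv at inference time
print(torch.allclose(y_multi, y_single, atol=1e-4))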
{"title":"Self-supervised learning-based re-parameterization instance segmentation for hazardous object in terahertz image","authors":"","doi":"10.1016/j.optlaseng.2024.108620","DOIUrl":"10.1016/j.optlaseng.2024.108620","url":null,"abstract":"<div><div>Recently, terahertz imaging technology has shown widespread potential applications in the public security screening field. Previous terahertz hazardous object detection techniques primarily relied on manual recognition and image processing methods. Some solutions incorporating deep learning technologies depend on large quantities of high-quality data, making it challenging to achieve low-cost and high-performance detection. To solve the issue of dependency on large amounts of high-quality data, we propose a diversified dynamic structure You Only Look Once (DDS-YOLO) model based on masked image modeling and structural reparameterization for the instance segmentation of hazardous objects in terahertz security inspection images. To address the scarcity and annotating difficulty problems of terahertz security inspection image samples, we combine automatic data strategies with masked image modeling for self-supervised learning. We propose a multilevel feature refinement fusion mechanism to enhance the quality of learned feature representations. The backbone parameter transfer and fine-tuning training strategies are employed to achieve hazardous object instance segmentation on the terahertz dataset. To address the low detection accuracy issue caused by the poor terahertz image quality, we develop a reparameterizable hierarchical structure for the backbone, improve a multi-scale feature-integrating neck, and design a dynamically decoupled head with lower computational requirements to enhance the performance of the instance segmentation model. Experimental results demonstrate that the proposed model accurately outputs detection boxes, categories, and segmentation masks for hazardous objects with minimal training samples. The comparative experimental results indicate that the proposed model outperforms existing state-of-the-art methods in terms of detection performance. The proposed DDS-YOLO model achieves 59.3 % in mask mean Average Precision (mAP) and 61.3 % in box mAP, and the model parameters and computational requirements also meet practical application scenarios.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}