64 dB gain Yb-doped fiber amplifier for small-signal lasers based on tandem pump cavity
Wei Gao, Pei Ju, Yanpeng Zhang, Haoyu Wang, Zhaohui Li, Zhe Li, Pei Huang, Aifeng He, Qi Gao, Wenhui Fan
Optics and Lasers in Engineering, Vol. 201, Article 109632
Pub Date: 2026-01-20 | DOI: 10.1016/j.optlaseng.2026.109632
We propose a high-gain fiber amplifier based on a tandem pump cavity tailored for small-signal lasers. The small-signal laser acquires high gain while passing through the tandem pump cavity and is then continuously amplified by the tandem core-pumping process. In our experimental setup, a 50 µW signal laser is amplified to 128.5 W, a maximum gain of 64 dB, surpassing the capabilities of conventional fiber amplifiers. This approach offers a promising pathway toward compact, reliable, and cost-effective high-gain fiber amplifiers.
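As a quick sanity check on the reported figures, the 64 dB gain follows directly from the input and output powers quoted in the abstract:

```python
import math

# Powers reported in the abstract
P_in = 50e-6   # 50 µW seed laser
P_out = 128.5  # 128.5 W amplified output

gain_db = 10 * math.log10(P_out / P_in)
print(f"{gain_db:.1f} dB")  # ≈ 64.1 dB, consistent with the reported 64 dB
```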
Single-shot high-dynamic-range 3-D measurement via color coding and spatial-frequency domain learning
Jing Xin, Fuqian Li, Qican Zhang, Yajun Wang, Haojie Wei
Optics and Lasers in Engineering, Vol. 201, Article 109633
Pub Date: 2026-01-20 | DOI: 10.1016/j.optlaseng.2026.109633
In optical three-dimensional (3D) measurement, fringe projection is one of the most reliable techniques for recovering the shape of objects. One challenge for fringe projection is the measurement of high-dynamic-range (HDR) surfaces. Current non-learning-based and learning-based methods typically require multiple phase-shifting (PS) fringe patterns to achieve high-precision HDR 3-D reconstruction, which limits measurement efficiency. To overcome this limitation, a single-shot HDR method (PI-SSM) based on color coding and spatial-frequency domain learning is proposed. This work makes two key contributions. First, a physics-informed single-shot measurement framework is proposed that integrates hardware modulation and deep-learning algorithms in a novel way for HDR 3-D reconstruction. Specifically, a color coding strategy acquires three fringe patterns simultaneously to realize single-shot measurement. Furthermore, through network training, the color crosstalk introduced by color coding is suppressed and the damaged phase of HDR surfaces is repaired. Second, to enhance fringe quality under severe HDR degradation, a spatial-frequency domain fringe enhancement network (SFENet) is designed. SFENet restores degraded fringes by jointly modeling local noise-induced distortions in the spatial domain and enforcing global periodic consistency in the frequency domain; a joint spatial-frequency loss further improves fringe enhancement quality and phase accuracy. Experiments demonstrate that the proposed PI-SSM method enables more accurate and efficient single-shot phase retrieval and generalizes well to various unseen HDR surfaces.
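The color-coded scheme acquires three phase-shifted fringe patterns in a single shot (one per RGB channel). Assuming a standard three-step phase-shifting encoding with 2π/3 shifts — a plausible basis for the scheme, though the paper's exact encoding may differ — the wrapped phase can be recovered channel-wise as:

```python
import numpy as np

def phase_from_three_step(I1, I2, I3):
    """Wrapped phase from three fringes with phase shifts 0, 2pi/3, 4pi/3 (standard formula)."""
    return np.arctan2(np.sqrt(3.0) * (I3 - I2), 2.0 * I1 - I2 - I3)

# Synthetic check: build three shifted fringe profiles and recover the phase
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.angle(np.exp(1j * x))  # wrapped ground truth
I1, I2, I3 = (0.5 + 0.4 * np.cos(x + k * 2 * np.pi / 3) for k in range(3))
phi = phase_from_three_step(I1, I2, I3)
```

In the actual method the per-channel fringes additionally suffer color crosstalk, which the network is trained to suppress before (or jointly with) this retrieval step.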
Single-photon lidar system and multiscale optimization algorithm for extreme SBR
Jinfeng Xu, Qingsheng Xue, Fengqin Lu, Junhong Song, Xing Li
Optics and Lasers in Engineering, Vol. 201, Article 109629
Pub Date: 2026-01-20 | DOI: 10.1016/j.optlaseng.2026.109629
Single-photon counting lidar enables highly sensitive imaging by detecting extremely weak photon returns, but reconstructing reliable depth and reflectance from sparse photon data under extremely low signal-to-background ratio (SBR) conditions remains challenging. We propose a reflectance-guided multi-scale joint optimization framework built on the Sparse Poisson Intensity Reconstruction Algorithm (SPIRAL-TAP). It reconstructs reflectance and depth collaboratively, avoiding the edge blurring and structural inconsistencies often observed when the two are reconstructed independently. In the depth update, reflectance-weighted guidance is introduced to improve reconstruction quality. Compared with several signal reconstruction algorithms, the proposed algorithm achieves high-quality 3D reconstruction with a reflectance root mean square error (RMSE) of 0.11 and a depth RMSE of 0.2 m at an extreme SBR of 0.04, a 48% reduction relative to the single-scale SPIRAL-TAP method. The effectiveness and generality of the framework are validated on publicly available sparse-photon datasets. The experimental results demonstrate that the method significantly improves reconstruction accuracy while preserving fine spatial details, providing a practical solution for 3D imaging under low-SBR single-photon lidar conditions.
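SPIRAL-TAP minimizes a penalized Poisson negative log-likelihood of the photon counts. A minimal sketch of the data-fidelity term alone (the sparsity penalty and the paper's reflectance-weighted guidance are omitted):

```python
import numpy as np

def poisson_nll(x, y, A, eps=1e-12):
    """Poisson negative log-likelihood of counts y given intensity x and forward operator A."""
    lam = A @ x
    return float(np.sum(lam - y * np.log(lam + eps)))

# In the noiseless case (y == A @ x_true), lam - y*log(lam) is minimized pointwise at lam == y,
# so the true intensity attains the smallest data-fidelity value.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
```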
Automatic refocusing method for nanoscale topographic measurement by optical coherence tomography
Di Yang, Zhuoqun Yuan, Xinyi Li, Yapeng Sun, Qiunan Yang, Yanmei Liang
Optics and Lasers in Engineering, Vol. 201, Article 109636
Pub Date: 2026-01-19 | DOI: 10.1016/j.optlaseng.2026.109636
Surface topography influences the functional properties of material interfaces, including optical, mechanical, and biological properties. Nanoscale topographic measurement is essential for precision manufacturing and surface-defect analysis. While optical coherence tomography (OCT) offers a millimeter-scale field of view without mechanical scanning, its effective topographic measurement range is limited by a shallow depth of focus, and defocusing degrades the accuracy and resolution of the measurement. To address this limitation, we propose an automatic refocusing method for nanoscale topographic measurement with a large axial measurement range. By analyzing the frequency content of the topographic information, we identified a robust frequency feature, termed the averaging low-frequency intensity, that enables precise estimation of the defocus distance. Based on this, a refocusing algorithm is developed to automatically determine and correct defocus without additional hardware. Experimental results on a USAF resolution target demonstrate that the proposed method extends the axial measurement range to six times the depth of focus while preserving lateral resolution and suppressing side lobes. Further validation on scratched glass showed that the method accurately recovers nanoscale surface damage at a defocus of six times the depth of focus. Finally, measurements of a metal sample with a rough surface demonstrated that the method recovers the nanoscale topography of complex samples under defocus. This approach offers a promising solution for high-precision surface profiling in industrial inspection and biomedical imaging.
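The abstract names the defocus feature only as an "averaging low-frequency intensity". A minimal sketch under the assumption that this means the mean spectral magnitude inside a low-frequency disc of an en-face image — a hypothetical definition; the paper's exact formulation may differ:

```python
import numpy as np

def avg_low_freq_intensity(enface, cutoff=0.05):
    """Mean |FFT| magnitude within a normalized low-frequency radius (assumed definition)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(enface)))
    ny, nx = F.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]  # cycles/sample, centered
    fx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    mask = np.hypot(fy, fx) < cutoff
    return float(F[mask].mean())

rng = np.random.default_rng(0)
metric = avg_low_freq_intensity(rng.standard_normal((64, 64)))
```

In the paper, a feature of this kind is tracked against refocus distance to locate the best-focus position automatically.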
A criterion for assessing spatial resolution in digital image correlation: Applications to conventional, Gaussian-weighted, and deep learning-based methods
Zhichen Tang, Canyu Zhu, Guanyu Cheng, Shihai Lan, Yong Su
Optics and Lasers in Engineering, Vol. 201, Article 109631
Pub Date: 2026-01-19 | DOI: 10.1016/j.optlaseng.2026.109631
Accurate characterization of spatial resolution is essential for assessing the performance of digital image correlation (DIC) methods. Inspired by the classical Rayleigh criterion, this study considers a double-step displacement field and defines the spatial resolution as the separation at which two adjacent displacement discontinuities can just be resolved in the measured strain field. Based on this definition, theoretical models are developed to quantify the spatial resolution. For conventional DIC with a subset size of 2M+1, the spatial resolution for the first-order shape function is approximately 2M+2, while that for the second-order shape function is about 0.66(2M+1). For Gaussian-weighted DIC with a weighting radius of R, the spatial resolution is approximately 2R for the first-order shape function and 1.53R for the second-order shape function. The proposed criterion is further applied to deep learning-based DIC methods (e.g., U-DICNet, DICTr, and USDICNet) to demonstrate its universality. In summary, this work establishes a physically grounded and quantitatively reliable framework for evaluating spatial resolution in conventional, weighted, and learning-based DIC methods, with potential applications in performance assessment, algorithm optimization, and parameter selection.
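For parameter selection, the reported resolution rules can be wrapped in a small helper (the numerical factors are exactly the paper's approximations; the helper itself is just an illustrative convenience):

```python
def dic_spatial_resolution(M=None, R=None, order=1):
    """Approximate DIC spatial resolution in pixels.

    Subset DIC (subset size 2M+1): 2M+2 for 1st-order, 0.66(2M+1) for 2nd-order shape functions.
    Gaussian-weighted DIC (radius R): 2R for 1st-order, 1.53R for 2nd-order shape functions.
    """
    if M is not None:
        return 2 * M + 2 if order == 1 else 0.66 * (2 * M + 1)
    return 2.0 * R if order == 1 else 1.53 * R

# Example: a 31x31 subset (M = 15)
res1 = dic_spatial_resolution(M=15, order=1)  # 32 px
res2 = dic_spatial_resolution(M=15, order=2)  # ~20.5 px
```

The second-order shape function resolves discontinuities roughly a third closer together than the first-order one for the same subset size.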
Polarization image fusion via analytical attention heads: A multi-scale feature integration framework
Junzhuo Zhou, Jun Zou, Ye Qiu, Zhihe Liu, Jia Hao, Wenli Li, Yiting Yu
Optics and Lasers in Engineering, Vol. 201, Article 109628
Pub Date: 2026-01-19 | DOI: 10.1016/j.optlaseng.2026.109628
Polarization imaging shows great potential for defect detection on highly reflective and low-contrast industrial surfaces. However, existing image fusion algorithms struggle with feature conflicts and polarization noise interference during the polarization fusion process. This paper proposes a polarization image fusion method based on analytical attention heads, aiming to integrate complementary information from different sources while enhancing the prominent features of the main source and suppressing polarization noise. The innovations of this paper are: 1) designing analytical attention heads based on mathematical principles to extract low-level image features such as gradients, textures, information, semantics, and noise; 2) detecting and enhancing prominent features in the main source image to prevent the feature loss caused by fusing conflicting features from different sources; 3) detecting noisy regions in polarization images and reducing their fusion weights to avoid interference from polarization noise. We evaluated our method on both a self-built polarization image dataset and public datasets, and the results demonstrate the advantages of our approach. The source code and datasets are publicly available at: https://github.com/FiredTable/DeepFusion.
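The analytical attention heads are hand-designed (non-learned) feature extractors. As a loose illustration only — not the paper's actual heads or fusion rule — a gradient-magnitude head can drive a pixel-wise fusion weight:

```python
import numpy as np

def gradient_head(img, eps=1e-6):
    """Normalized gradient-magnitude map: one example of an analytical, non-learned feature."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    return g / (g.max() + eps)

def fuse(main, aux):
    """Pixel-wise convex combination: lean on the auxiliary source where its gradients are strong."""
    w = gradient_head(aux)
    return (1.0 - w) * main + w * aux

rng = np.random.default_rng(1)
main = rng.random((32, 32))   # e.g., intensity image
aux = rng.random((32, 32))    # e.g., a polarization channel (synthetic stand-in)
fused = fuse(main, aux)
```

The paper's framework combines several such heads (gradient, texture, information, semantic, noise) at multiple scales, with the noise head specifically lowering weights in noisy polarization regions.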
Dynamic opto-mechanical integrated modeling and simulation of high-resolution space telescopes
Rifan Chen, Zongxuan Li, Shuping Tao, Qing Luo, Youhan Peng, Shuhui Ren, Zhiyuan Gu
Optics and Lasers in Engineering, Vol. 201, Article 109626
Pub Date: 2026-01-19 | DOI: 10.1016/j.optlaseng.2026.109626
To quantitatively evaluate the temporal variations of on-orbit imaging quality of space telescopes under dynamic disturbances, this study first establishes a structural-dynamics state-space model of the telescope and validates its accuracy against traditional finite element methods: the mean relative errors are 0.96 % for frequency response analysis and 1.22 % for transient response analysis. Subsequently, the instantaneous rigid-body displacements of the mirror surfaces are fitted from the transient response results, with the mean relative error between the fitted results and those from Sigfit below 2 %, validating the dynamic response solution and rigid-body displacement fitting. The offset of the image point is then used to describe the dynamic line-of-sight (LOS) error of the optical system. Based on opto-mechanical coupled ray-tracing theory, real-time reconstruction of the opto-mechanical system and ray-tracing analysis reveal that the maximum relative displacement of image points during imaging is 0.53 μm (less than 1/6 of a pixel). Quantitative assessment shows mean relative errors for image-point offsets in the X and Y directions of 2.53 % and 3.14 %, respectively, compared with Zemax simulation. Furthermore, the edge method was used to calculate the MTF of the imaging system under the sole influence of micro-vibrations, yielding 0.9833 at 143 lp/mm. This indicates that micro-vibrations have only a small impact on the overall imaging quality of the system. The developed framework enables accurate micro-vibration simulation and provides theoretical guidance for optimizing the vibration isolation of space telescopes.
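The "<1/6 of a pixel" figure is consistent with the 143 lp/mm MTF frequency if 143 lp/mm is taken as the detector's Nyquist frequency — an assumption for this check, since the abstract does not state the pixel pitch:

```python
# Assumed: 143 lp/mm is the Nyquist frequency, so pixel pitch = 1/(2*143) mm
pitch_um = 1e3 / (2 * 143)   # ≈ 3.50 µm
ratio = 0.53 / pitch_um      # max image-point displacement as a fraction of a pixel
print(f"pitch {pitch_um:.2f} µm, displacement {ratio:.3f} px")  # ratio ≈ 0.152 < 1/6 ≈ 0.167
```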
Doubled field-of-view of point diffraction interferometer with a grating outside the Fourier plane
Jianchao Guo, Mingguang Shan, Zhi Zhong, Bin Liu, Lei Yu, Lijing Wang, Lei Liu
Optics and Lasers in Engineering, Vol. 201, Article 109604
Pub Date: 2026-01-16 | DOI: 10.1016/j.optlaseng.2026.109604
The point diffraction interferometer (PDI) is a promising quantitative phase imaging (QPI) method with the advantages of compactness and stability. However, the field-of-view (FOV) of a PDI is always a compromise between the sensor size and the magnification. To solve this problem, a PDI with doubled FOV is set up by placing a grating outside the Fourier plane of a 4f system, giving a simple optical setup and a larger FOV without decreasing the magnification. First, a 4f system is built with two lenses. Then, a grating is placed outside the Fourier plane of the 4f system, while a hole array is placed exactly at the Fourier plane. The grating diffracts the object beam into several duplicates with relative offsets along its periodicity, each carrying a different region of the object. The hole array comprises one pinhole and two large holes. One of the ±1 diffraction orders is low-pass filtered by the pinhole to form the reference beam, while the other ±1 order and the 0th order pass through the large holes and act as object beams with different FOVs. The image sensor is placed at the overlapping area of the two FOVs, enabling two distinct regions of the object to be captured simultaneously in a single shot. Moreover, owing to the different angles between the reference beam and the object beams, the object beams with different FOVs carry different spatial carrier frequencies in the multiplexed interferogram. To avoid crosstalk, the two object beams are modulated into orthogonal polarization states so that they do not interfere with each other. The validity and feasibility of this PDI are verified by experiments on a 1951 USAF resolution plate, a bee wing, and onion epidermal cells. The experimental results show that the proposed PDI doubles the FOV without sacrificing image quality, suggesting a range of applications in microscopic imaging and optical metrology.
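Demultiplexing the two FOVs relies on the distinct spatial carrier frequencies set by the reference-object beam angles. For a hypothetical angle and wavelength — neither is given in the abstract — the carrier follows the usual off-axis relation f = sin(θ)/λ:

```python
import math

lam = 632.8e-9              # assumed He-Ne wavelength, m (hypothetical)
theta = math.radians(1.5)   # hypothetical reference-object beam angle

f_carrier = math.sin(theta) / lam  # spatial carrier frequency, cycles/m
cycles_per_mm = f_carrier * 1e-3   # ≈ 41 cycles/mm for these values
```

Each object beam's spectrum sits at its own carrier in the Fourier domain of the interferogram, so the two FOV regions can be filtered apart as long as their carriers (and bandwidths) do not overlap.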
Pub Date : 2026-01-16DOI: 10.1016/j.optlaseng.2026.109620
Yingdong He , Wei Liu , Jiahe Ouyang , Jianhui Zhong , Chengbin Li , Yi Li , Yun Lin , Hao Dai , Zhijun Wu , Xining Zhang
A PDMS-encapsulated microfiber loop cavity (MLC) temperature sensor combined with a random forest (RF) model is proposed to achieve precise multipoint temperature prediction within millimeter-scale micro-regions. By constructing anisotropic thermal fields with orthogonal heating wires, the MLC’s optical responses were analyzed to infer temperatures at multiple discrete locations, including on- and off-microfiber positions. The RF model, trained on structural parameters and integrated optical intensity, achieved high prediction accuracy (RMSE ≈ 2.5 °C, R² ≈ 0.97 for horizontal heating) across multiple sensing points. Temperature gradients and their vector characteristics were subsequently derived from the predicted temperatures, revealing distinct spatial characteristics under horizontal and vertical heating that are strongly correlated with device geometry. This study demonstrates that integrating optical microcavity sensing with machine learning enables stable thermal analysis without requiring multi-sensor arrays, offering a promising route for microelectronic thermal management, structural health monitoring, and high-temperature warning in micro-nano devices.
{"title":"Machine-learning-enabled loop microcavity for multipoint sensing of microscale nonlinear thermal fields","authors":"Yingdong He , Wei Liu , Jiahe Ouyang , Jianhui Zhong , Chengbin Li , Yi Li , Yun Lin , Hao Dai , Zhijun Wu , Xining Zhang","doi":"10.1016/j.optlaseng.2026.109620","DOIUrl":"10.1016/j.optlaseng.2026.109620","url":null,"abstract":"<div><div>A PDMS-encapsulated microfiber loop cavity (MLC) temperature sensor combined with a random forest (RF) model is proposed to achieve precise multipoint temperature prediction within millimeter-scale micro-regions. By constructing anisotropic thermal fields with orthogonal heating wires, the MLC’s optical responses were analyzed to infer temperatures at multiple discrete locations, including on- and off-microfiber positions. The RF model, trained on structural parameters and integrated optical intensity, achieved high prediction accuracy (RMSE≈2.5°C, R<sup>2</sup>≈0.97 for horizontal heating) across multiple sensing points. Temperature gradients and their vector characteristics were subsequently derived from the predicted temperatures, revealing distinct spatial characteristics under horizontal and vertical heating that are strongly correlated with device geometry. This study demonstrates that integrating optical microcavity sensing with machine learning enables stable thermal analysis without requiring multi-sensor arrays, offering a promising route for microelectronic thermal management, structural health monitoring, and high-temperature warning in micro-nano devices.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109620"},"PeriodicalIF":3.7,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145969393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
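As a rough illustration of the machine-learning step above — a random forest mapping structural parameters and integrated optical intensity to temperatures at several sensing points — here is a minimal sketch on synthetic stand-in data. The feature set, the target model, and every number below are assumptions for the sketch, not the authors' calibration data; it uses scikit-learn's `RandomForestRegressor`, which supports multi-output regression directly.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical feature vector per measurement: loop geometry, encapsulation
# thickness, heater drive, and an integrated optical intensity reading
# (all ranges invented for this sketch)
features = np.column_stack([
    rng.uniform(0.8, 1.2, n),   # loop diameter (mm)
    rng.uniform(0.1, 0.3, n),   # PDMS layer thickness (mm)
    rng.uniform(0.0, 1.0, n),   # heater drive (normalized)
    rng.normal(0.0, 1.0, n),    # integrated intensity (a.u.)
])

# Synthetic stand-in for calibrated temperatures at three sensing points,
# each depending nonlinearly on the features, plus measurement noise
T = np.column_stack([
    25 + 40 * features[:, 2] + 5 * features[:, 3],
    25 + 30 * features[:, 2] * features[:, 0],
    25 + 20 * np.sqrt(features[:, 2] + 0.1),
]) + rng.normal(0.0, 0.5, (n, 3))

Xtr, Xte, ytr, yte = train_test_split(features, T, test_size=0.25,
                                      random_state=0)

# One forest predicts all sensing-point temperatures at once
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
pred = model.predict(Xte)
rmse = np.sqrt(mean_squared_error(yte, pred))
r2 = r2_score(yte, pred)
```

With predicted multipoint temperatures in hand, spatial gradients like those in the abstract follow from finite differences between neighboring sensing-point positions.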
Pub Date : 2026-01-15DOI: 10.1016/j.optlaseng.2026.109601
Jinghao Xu , Yizheng Liao , Tianci Feng , Siyuan Wang , Duan Luo , An Pan
This paper proposes a novel Digital Incoherent Fourier Ptychography (DI-FP) technique that effectively addresses the speckle noise challenge in long-range Fourier ptychographic imaging through an innovative batch gradient summation mechanism. Compared with conventional methods, this study makes several key contributions: First, we develop a feature-domain batch gradient summation algorithm that exploits the randomness of multi-angle speckles to achieve automatic noise cancellation without requiring additional preprocessing. Second, we construct a new reconstruction framework integrating incoherent imaging with feature extraction, which significantly enhances image contrast while maintaining resolution. Experimental results demonstrate that for imaging at distances of 12.8 m and 65 m, our method improves reconstruction quality (PSNR) from 5.42 dB (conventional method) to 13.98 dB, substantially reduces speckle contrast, and decreases single-reconstruction time from 150 s to 44 s. This work provides a new solution for long-range high-resolution optical imaging that combines excellent anti-noise performance with computational efficiency, showing significant application potential in remote sensing monitoring and target recognition fields.
{"title":"DI-FP: Digital incoherent Fourier ptychography for far-field imaging","authors":"Jinghao Xu , Yizheng Liao , Tianci Feng , Siyuan Wang , Duan Luo , An Pan","doi":"10.1016/j.optlaseng.2026.109601","DOIUrl":"10.1016/j.optlaseng.2026.109601","url":null,"abstract":"<div><div>This paper proposes a novel Digital Incoherent Fourier Ptychography (DI-FP) technique that effectively addresses the speckle noise challenge in long-range Fourier ptychographic imaging through an innovative batch gradient summation mechanism. Compared with conventional methods, this study makes several key contributions: First, we develop a feature-domain batch gradient summation algorithm that exploits the randomness of multi-angle speckles to achieve automatic noise cancellation without requiring additional preprocessing. Second, we construct a new reconstruction framework integrating incoherent imaging with feature extraction, which significantly enhances image contrast while maintaining resolution. Experimental results demonstrate that for imaging at distances of 12.8 m and 65 m, our method improves reconstruction quality (PSNR) from 5.42 dB (conventional method) to 13.98 dB, substantially reduces speckle contrast, and decreases single-reconstruction time from 150 s to 44 s. This work provides a new solution for long-range high-resolution optical imaging that combines excellent anti-noise performance with computational efficiency, showing significant application potential in remote sensing monitoring and target recognition fields.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"200 ","pages":"Article 109601"},"PeriodicalIF":3.7,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145979641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
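The batch gradient summation idea — per-frame gradients summed over many speckle realizations so that zero-mean speckle fluctuations cancel while the object term accumulates — can be sketched in a toy least-squares setting. This is a deliberately simplified stand-in, not the paper's feature-domain Fourier-ptychography algorithm: the object, the fully developed speckle model, and the step sizes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 32

# Hypothetical ground-truth object: a smooth positive image
x = np.linspace(-1, 1, N)
obj = 1.0 + np.exp(-(x[None, :] ** 2 + x[:, None] ** 2) / 0.2)

# K incoherent measurements, each corrupted by independent fully developed
# speckle (exponentially distributed intensity with unit mean)
meas = [obj * rng.exponential(1.0, obj.shape) for _ in range(K)]

def reconstruct(measurements, steps=200, lr=0.02):
    """Gradient descent on a least-squares data term. The per-frame
    gradients are summed over the whole batch at each iteration, so the
    random speckle fluctuations tend to cancel while the common object
    term adds up coherently."""
    est = np.ones_like(measurements[0])
    for _ in range(steps):
        grad = sum(est - y for y in measurements)  # batch gradient summation
        est -= lr * grad / len(measurements)
        est = np.clip(est, 0, None)                # keep intensities physical
    return est

batch_rec = reconstruct(meas)        # all K frames per update: speckle cancels
single_rec = reconstruct(meas[:1])   # one frame only: speckle survives

def speckle_contrast(im):
    """Standard speckle-contrast metric: std / mean of the intensity."""
    return im.std() / im.mean()
```

In this toy model the batch-summed reconstruction converges toward the speckle-averaged object (contrast reduced roughly as 1/√K), while the single-frame reconstruction retains the full speckle pattern, which mirrors the noise-cancellation behavior the abstract attributes to multi-angle speckle randomness.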