Pub Date : 2026-01-24 | DOI: 10.1016/j.optlaseng.2026.109651
Far field single-pixel image-free tracking for moving object
Yunsong Gu, Huahua Wang, Han Zhuang, Bingjie Li, Hongyue Xiao, Changqi Zhang, Hongwei Jiang, Haochong Huang, Zhiyuan Zheng, Ze Zhang, Lu Gao
Real-time tracking of key parts of moving objects remains a major challenge in practical single-pixel imaging systems. In this work, we propose a far-field single-pixel image-free tracking framework for moving objects based on an image-free tracking network (IFTN). By jointly optimizing structured illumination and network training, the proposed method directly maps one-dimensional single-pixel measurements to target coordinates without image reconstruction, effectively mitigating motion-blur-induced accuracy degradation. Experimental results demonstrate that the proposed approach achieves a tracking accuracy of 85.8% with a mean absolute percentage error of 8.4% at a sampling rate of 6.25%, and a tracking speed of 95.2 Hz. Furthermore, the system enables far-field tracking at distances up to 80 m. Compared with traditional single-pixel imaging-based tracking methods that rely on image reconstruction, the proposed method improves tracking accuracy by more than six times. This work provides an efficient and flexible solution for image-free tracking in remote sensing and related applications.
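As a rough illustration of the image-free mapping described above, the sketch below regresses (x, y) coordinates directly from a 1D vector of single-pixel measurements with a small multilayer perceptron. The layer sizes, the 256-measurement input (6.25% of a 64 × 64 scene), and the MAPE helper are illustrative assumptions, not the authors' IFTN.

```python
# Minimal sketch of image-free tracking: a small MLP regresses target
# coordinates directly from 1D single-pixel measurements, with no image
# reconstruction step. Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class ImageFreeTracker(nn.Module):
    def __init__(self, n_meas=256):           # 6.25% of a 64 x 64 scene
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_meas, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 2),                 # (x, y) target coordinates
        )

    def forward(self, m):                      # m: (batch, n_meas) intensities
        return self.net(m)

# Mean absolute percentage error, the accuracy metric quoted above.
def mape(pred, true):
    return (100.0 * (pred - true).abs() / true.abs().clamp(min=1e-6)).mean()
```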
{"title":"Far field single-pixel image-free tracking for moving object","authors":"Yunsong Gu , Huahua Wang , Han Zhuang , Bingjie Li , Hongyue Xiao , Changqi Zhang , Hongwei Jiang , Haochong Huang , Zhiyuan Zheng , Ze Zhang , Lu Gao","doi":"10.1016/j.optlaseng.2026.109651","DOIUrl":"10.1016/j.optlaseng.2026.109651","url":null,"abstract":"<div><div>Real-time tracking of key parts of moving objects remains a major challenge in practical single-pixel imaging systems. In this work, we propose a far-field single-pixel image-free tracking framework for moving objects based on an image-free tracking network (IFTN). By jointly optimizing structured illumination and network training, the proposed method directly maps one-dimensional single-pixel measurements to target coordinates without image reconstruction, effectively mitigating motion-blur-induced accuracy degradation. Experimental results demonstrate that the proposed approach achieves a tracking accuracy of 85.8% with a mean absolute percentage error of 8.4% at a sampling rate of 6.25%, and a tracking speed of 95.2 Hz. Furthermore, the system enables far-field tracking at distances up to 80 m. Compared with traditional single-pixel imaging-based tracking methods that rely on image reconstruction, the proposed method improves tracking accuracy by more than six times. This work provides an efficient and flexible solution for image-free tracking in remote sensing and related applications.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109651"},"PeriodicalIF":3.7,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-24 | DOI: 10.1016/j.optlaseng.2026.109657
S3CRAD: Superpixel-guided background inpainting and spatial-spectral constrained representation for hyperspectral anomaly detection
Mingtao You, Yiming Yao, Dong Zhao, Zhe Zhao, Pattathal V. Arun, Yian Wang, Huixin Zhou, Ronghua Chi
Hyperspectral anomaly detection techniques aim to effectively separate anomalies from the background. Most of the existing approaches do not focus on the contours of anomalous targets, resulting in blurred detection results. In order to overcome this challenge, we propose a superpixel-guided background inpainting and spatial-spectral constrained representation method for hyperspectral anomaly detection (S3CRAD). Specifically, we propose a superpixel-guided strategy that highlights the boundary information between anomalies and the background. Moreover, the existing methods do not fully exploit the differences between anomalies and the background during background reconstruction. Hence, we propose a multi-feature fusion strategy that considers the differences in image contrasts, further emphasizing the difference between anomaly and background pixels. Finally, we propose a spatial-spectral weighting scheme to regularize the representation coefficients, thereby exploiting spatial and spectral information more effectively than existing methods. With the regularized coefficients, the target pixel is better reconstructed via representation. The anomaly result is obtained by computing the residual between the original and reconstructed pixels. The key advantage of our method lies in its ability to fully utilize both spatial and spectral information while effectively reducing the impact of noise on anomaly detection results. Experimental results demonstrate that our approach outperforms nine state-of-the-art methods.
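The final scoring step described above is simple enough to sketch: assuming the background reconstruction has already been produced by the representation model, the anomaly map is the per-pixel residual between the original and reconstructed spectra. The array shapes and the three-sigma thresholding hint are illustrative assumptions.

```python
# Sketch of the final S3CRAD step: score each pixel by the residual
# between its original spectrum and its representation-based background
# reconstruction (all upstream steps assumed already done).
import numpy as np

def anomaly_map(X, X_rec):
    """X, X_rec: (H, W, B) hyperspectral cube and its background
    reconstruction; returns an (H, W) anomaly score map."""
    return np.linalg.norm(X - X_rec, axis=-1)   # per-pixel spectral residual

# Pixels poorly explained by the background model get large residuals
# and can be flagged, e.g. score > score.mean() + 3 * score.std().
```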
{"title":"S3CRAD: Superpixel-guided background inpainting and spatial-spectral constrained representation for hyperspectral anomaly detection","authors":"Mingtao You , Yiming Yao , Dong Zhao , Zhe Zhao , Pattathal V. Arun , Yian Wang , Huixin Zhou , Ronghua Chi","doi":"10.1016/j.optlaseng.2026.109657","DOIUrl":"10.1016/j.optlaseng.2026.109657","url":null,"abstract":"<div><div>Hyperspectral anomaly detection techniques aim to effectively separate anomalies from the background. Most of the existing approaches do not focus on the contours of anomalous targets, resulting in blurred detection results. In order to overcome this challenge, we propose a superpixel-guided background inpainting and spatial-spectral constrained representation method for hyperspectral anomaly detection (S3CRAD). Specifically, we propose a superpixel-guided strategy that highlights the boundary information between anomalies and the background. Moreover, the existing methods do not fully exploit the differences between anomalies and the background during background reconstruction. Hence, we propose a multi-feature fusion strategy that considers the differences in image contrasts, further emphasizing the difference between anomaly and background pixels. Finally, we propose a spatial-spectral weighting scheme to regularize the representation coefficients, thereby exploiting spatial and spectral information more effectively than existing methods. With the regularized coefficients, the target pixel is better reconstructed via representation. The anomaly result is obtained by computing the residual between the original and reconstructed pixels. The key advantage of our method lies in its ability to fully utilize both spatial and spectral information while effectively reducing the impact of noise on anomaly detection results. Experimental results demonstrate that our approach outperforms nine state-of-the-art methods.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109657"},"PeriodicalIF":3.7,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146079581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality head-up displays (AR-HUDs) serve as in-vehicle human-machine interfaces, with a critical requirement for a continuously adjustable virtual image distance (VID) across a large field of view (FOV). This adaptability allows the virtual image to align with objects at varying depths in the road environment, thereby mitigating visual fatigue. An AR-HUD employs a wide FOV and a large eyebox, resulting in a high étendue, which necessitates optical elements spanning more than 10 centimeters. Consequently, the varifocal component must feature a large aperture and be thin to ensure a compact form factor suitable for automotive integration. Among various slim varifocal optics, Alvarez lenses, which work through transverse lens displacement, offer good scalability due to the mature fabrication of freeform optics. However, increasing the aperture introduces significant aberrations in the conventional cubic-form Alvarez lens design. This study presents optimal design rules for applying Alvarez lenses to AR-HUDs, including the displacement method, higher-order terms beyond the classic cubic form, and co-optimization with the freeform mirror. An AR-HUD prototype with a FOV of 13° by 5° and an eyebox of 130 mm by 60 mm was built using Alvarez lenses. A continuously variable VID ranging from 2.5 to 7.5 m was achieved with a resolution of more than 60 pixels per degree. The Alvarez lenses span more than 20 cm transversely and are only 2.5 cm thick, enabling a compact volume of 10.4 liters, almost no increase over that of a single-focal HUD.
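For context on why transverse displacement changes focus, the sketch below evaluates the textbook cubic-form Alvarez relation: two complementary cubic plates shifted by ±d sum to a parabolic thickness profile, giving a focal power proportional to the shift. The index, cubic coefficient, and shift values are illustrative assumptions, and the paper's nonparaxial, higher-order design goes beyond this classic form.

```python
# Focal power of an ideal cubic-form Alvarez pair versus lateral shift d.
# Complementary plates t(x, y) = +/-A*(x**3/3 + x*y**2), displaced by
# +/-d along x, sum to the parabolic profile 2*A*d*(x**2 + y**2), i.e.
# a thin lens of power P = 4*A*d*(n - 1). Textbook cubic relation only;
# the paper adds higher-order terms on top of it.
n = 1.49           # refractive index (PMMA, assumed)
A = 5e-4           # cubic coefficient in mm^-2 (illustrative)

def focal_length_mm(d_mm):
    P = 4.0 * A * d_mm * (n - 1.0)       # optical power in mm^-1
    return float("inf") if P == 0 else 1.0 / P

for d in (0.5, 1.0, 2.0):                # lateral shifts in mm
    print(f"d = {d} mm -> f = {focal_length_mm(d):.0f} mm")
```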
{"title":"Large-aperture nonparaxial Alvarez lenses enabling varifocal augmented reality head-up displays with a wide field of view","authors":"Yi Liu, Haoteng Liu, Zhiqing Zhao, Qimeng Wang, Weiji Liang, Bo-Ru Yang, Zong Qin","doi":"10.1016/j.optlaseng.2026.109638","DOIUrl":"10.1016/j.optlaseng.2026.109638","url":null,"abstract":"<div><div>Augmented reality head-up displays (AR-HUDs) serve as in-vehicle human-machine interfaces, with a critical requirement for a continuously adjustable virtual image distance (VID) across a large field of view (FOV). This adaptability allows the virtual image to align with objects at varying depths in the road environment, thereby mitigating visual fatigue. An AR-HUD employs a wide FOV and a large eyebox, resulting in a high étendue, which necessitates optical elements spanning more than 10 centimeters. Consequently, the varifocal component must feature a large aperture and be thin to ensure a compact form factor suitable for automotive integration. Among various slim varifocal optics, Alvarez lenses, which work through transverse lens displacement, offer good scalability due to the mature fabrication of freeform optics. However, increasing the aperture introduces significant aberrations regarding the conventional cubic-form Alvarez lens design. This study presents optimal design rules for applying Alvarez lenses to AR-HUDs, including the displacement method, higher-order terms beyond the classic cubic form, and co-optimization with the freeform mirror. An AR-HUD prototype with a FOV of 13° by 5° and an eyebox of 130 mm by 60 mm was built using Alvarez lenses. A continuously variable VID ranging from 2.5 to 7.5 m was achieved with a resolution of more than 60 pixels per degree. The Alvarez lenses span more than 20 cm transversely and are only 2.5 cm thick, enabling a compact volume of 10.4 liters—almost no increase from that of a single-focal HUD.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109638"},"PeriodicalIF":3.7,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-23 | DOI: 10.1016/j.optlaseng.2026.109634
Wavefront-driven optimization for high-quality hologram generation
Zhenyang Zhang, Xianrui Feng, Hao Jiang, Jian Wang, Zhengqiong Dong, Lei Nie, Jinlong Zhu, Shiyuan Liu
We propose a wavefront-driven optimization design method (WDOD) based on an indirect optimization strategy for computer-generated holograms, which optimizes the wavefront of a virtual object to enhance the imaging quality and to accelerate the convergence speed of amplitude-only holograms (AOHs). We experimentally demonstrated that the proposed method achieves faster convergence, higher imaging quality, and superior robustness compared to its traditional counterparts, such as gradient descent and modified Gerchberg-Saxton algorithms. Moreover, we have demonstrated that this method can be further extended to multi-plane and tilted-plane holography, which is critical to fields such as holographic displays, optical manipulation, and holographic lithography.
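For reference, here is a minimal sketch of the kind of baseline the authors compare against: a modified Gerchberg-Saxton loop that alternates between enforcing the target amplitude in the image plane and the amplitude-only constraint in the hologram plane, under simple Fourier propagation. This illustrates the baseline idea only; the proposed WDOD instead optimizes a virtual object's wavefront and is not reproduced here.

```python
# Modified Gerchberg-Saxton loop for an amplitude-only hologram (AOH)
# under Fourier propagation. A minimal baseline sketch, not the WDOD.
import numpy as np

def gs_amplitude_hologram(target, iters=100):
    """target: 2D array of desired image-plane amplitudes (0..1)."""
    field = np.exp(1j * 2 * np.pi * np.random.rand(*target.shape))
    for _ in range(iters):
        img = np.fft.fft2(field)                     # propagate to image plane
        img = target * np.exp(1j * np.angle(img))    # enforce target amplitude
        field = np.fft.ifft2(img)                    # back to hologram plane
        field = np.abs(field)                        # amplitude-only constraint
    return field / field.max()                       # normalized AOH
```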
{"title":"Wavefront-driven optimization for high-quality hologram generation","authors":"Zhenyang Zhang , Xianrui Feng , Hao Jiang , Jian Wang , Zhengqiong Dong , Lei Nie , Jinlong Zhu , Shiyuan Liu","doi":"10.1016/j.optlaseng.2026.109634","DOIUrl":"10.1016/j.optlaseng.2026.109634","url":null,"abstract":"<div><div>We propose a wavefront-driven optimization design method (WDOD) based on an indirect optimization strategy for computer-generated holograms, which optimizes the wavefront of a virtual object to enhance the imaging quality and to accelerate the convergence speed of amplitude-only holograms (AOHs). We experimentally demonstrated that the proposed method achieves faster convergence, higher imaging quality, and superior robustness compared to its traditional counterparts, such as gradient descent and modified Gerchberg-Saxton algorithms. Moreover, we have demonstrated that this method can be further extended to multi-plane and tilted-plane holography, which is critical to fields such as holographic displays, optical manipulation, and holographic lithography.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109634"},"PeriodicalIF":3.7,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-23 | DOI: 10.1016/j.optlaseng.2026.109642
Neural network–assisted optical temperature estimation using a composite–type optical fiberscope for transbronchial photothermal therapy
Takeshi Seki, Kiyoshi Oka, Akihiro Naganawa
Transbronchial photothermal therapy (PTT) is a promising modality for treating peripheral lung cancer, especially in patients who are not eligible for surgery. However, real-time temperature monitoring in deep and narrow bronchial regions remains a significant challenge. Conventional thermometry techniques, such as MRI, CT, ultrasound, and pyrometry, are limited by poor spatial access, low accuracy, or incompatibility with bronchoscopy. To address this, we propose a neural network–assisted method to estimate tissue temperature from reflected light spectra and laser power, enabling non-contact thermometry without additional sensors. This approach was implemented using a composite-type optical fiberscope capable of simultaneous laser irradiation and optical sensing. We validated the system using tissue-mimicking phantoms containing porphysome at concentrations of 29, 58, and 100 µM under 250 or 500 mW laser irradiation, targeting temperatures between room temperature and 60 °C. The root mean square error (RMSE) between estimated and actual temperatures was 1.50 °C to 2.89 °C at 250 mW and 1.60 °C to 3.44 °C at 500 mW. This is the first report of real-time temperature estimation using deep learning and a composite fiberscope in bronchoscopic PTT. The proposed method enables compact, cost-effective, and sensorless temperature monitoring, offering a practical solution for safe and effective clinical PTT in anatomically constrained regions.
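A minimal sketch of the sensorless estimation idea: a regression network takes the reflected-light spectrum concatenated with the laser power and outputs a temperature, with RMSE as the reported metric. The architecture, band count, and layer sizes are illustrative assumptions rather than the paper's network.

```python
# Sketch of spectrum + power -> temperature regression for sensorless
# thermometry. Generic MLP for illustration; details differ from the paper.
import torch
import torch.nn as nn

class SpectralThermometer(nn.Module):
    def __init__(self, n_bands=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands + 1, 128), nn.ReLU(),   # spectrum + power (W)
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),                          # temperature in deg C
        )

    def forward(self, spectrum, power):
        # spectrum: (batch, n_bands); power: (batch,)
        x = torch.cat([spectrum, power.unsqueeze(-1)], dim=-1)
        return self.net(x).squeeze(-1)

def rmse(pred, true):                                  # metric reported above
    return torch.sqrt(torch.mean((pred - true) ** 2))
```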
{"title":"Neural network–assisted optical temperature estimation using a composite–type optical fiberscope for transbronchial photothermal therapy","authors":"Takeshi Seki , Kiyoshi Oka , Akihiro Naganawa","doi":"10.1016/j.optlaseng.2026.109642","DOIUrl":"10.1016/j.optlaseng.2026.109642","url":null,"abstract":"<div><div>Transbronchial photothermal therapy (PTT) is a promising modality for treating peripheral lung cancer, especially in patients who are not eligible for surgery. However, real-time temperature monitoring in deep and narrow bronchial regions remains a significant challenge. Conventional thermometry techniques, such as MRI, CT, ultrasound, and pyrometry, are limited by poor spatial access, low accuracy, or incompatibility with bronchoscopy. To address this, we propose a neural network–assisted method to estimate tissue temperature using reflected light spectra and laser power, enabling non-contact thermometry without additional sensors. This approach was implemented using a composite-type optical fiberscope capable of simultaneous laser irradiation and optical sensing. We validated the system using tissue-mimicking phantoms containing porphysome at concentrations of 29, 58, and 100 µM under 250 or 500 mW laser irradiation, targeting temperatures between room temp. and 60 °C. Root Mean Square Error (RMSE) between estimated and actual temperatures was 1.50 °C to 2.89 °C for 250 mW and 1.60 °C to 3.44 °C for 500 mW. This is the first report of real-time temperature estimation using deep-learning and a composite fiberscope in bronchoscopic PTT. The proposed method enables compact, cost-effective, and sensorless temperature monitoring, offering a practical solution for safe and effective clinical PTT in anatomically constrained regions.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109642"},"PeriodicalIF":3.7,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-23 | DOI: 10.1016/j.optlaseng.2026.109640
Space-frequency analysis-based Fourier single-pixel 3D imaging
Le Hu, Xinyue Ma, Junyang Li, Ke Hu, Chenxing Wang
Single-pixel imaging (SPI) has gained significant attention due to its potential for wide-spectrum imaging. In SPI, the target scene is sampled by projecting a series of encoding patterns, and an image is then reconstructed from the captured intensity sequence. When combined with Fringe Projection Profilometry (FPP), 3D SPI can be achieved by modulating depth information into a fringe-deformed image. However, typical SPI sampling strategies often struggle to balance low sampling rates with high accuracy, and maintaining imaging details is even more challenging. To address these issues, we propose a novel Fourier single-pixel imaging strategy based on space-frequency analysis. A windowed sampling strategy is introduced to efficiently obtain an image prior, overcoming the inherent lack of spatial resolution of the single-pixel detector (SPD). With this prior image, spectral analysis extracts the significant components used to reconstruct the target's basic structure, after which spatial analysis enhances the details. Finally, the detail-enhanced image is analyzed again to extract further significant components, enabling the recovery of target details. Extensive experiments verify that our space-frequency method enhances imaging details at low sampling rates.
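As background for the proposed strategy, the sketch below shows the standard four-step phase-shifting acquisition at the core of Fourier single-pixel imaging: each sampled spatial frequency takes four patterned measurements that combine into one complex Fourier coefficient. The paper's windowed space-frequency scheme builds on top of this basic step; the simulated bucket detector here is an assumption for illustration.

```python
# Core Fourier single-pixel acquisition: project four phase-shifted
# cosine patterns per spatial frequency and combine the four detector
# readings into one complex Fourier coefficient. Standard FSI only.
import numpy as np

def fsi_coefficient(scene, fx, fy):
    """Measure the (fx, fy) Fourier coefficient of `scene` (H, W) with
    four-step phase shifting and a simulated single-pixel detector."""
    H, W = scene.shape
    y, x = np.mgrid[0:H, 0:W]
    d = []
    for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2):
        pattern = 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x / W + fy * y / H) + phi)
        d.append(np.sum(scene * pattern))            # bucket-detector value
    return (d[0] - d[2]) + 1j * (d[1] - d[3])        # complex coefficient
```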
{"title":"Space-frequency analysis-based Fourier single-pixel 3D imaging","authors":"Le Hu , Xinyue Ma , Junyang Li , Ke Hu , Chenxing Wang","doi":"10.1016/j.optlaseng.2026.109640","DOIUrl":"10.1016/j.optlaseng.2026.109640","url":null,"abstract":"<div><div>Single-pixel imaging (SPI) has gained significant attention due to its potential for wide-spectrum imaging. In SPI, the target scene is sampled by projecting a series of encoding patterns, and then an image is reconstructed from the captured intensity sequence. When combined with Fringe Projection Profilometry (FPP), 3D SPI can be achieved by modulating depth information into a fringe-deformed image. However, typical sampling strategies of SPI often struggle to balance low sampling rates with high accuracy, and maintaining imaging details is even more challenging. To address these issues, we propose a novel Fourier single-pixel imaging strategy based on space-frequency analysis. A windowed sampling strategy is introduced to efficiently obtain an image prior, solving the inherent drawback of SPD’s lack of spatial resolution. With the obtained prior image, spectral analysis is conducted to extract significant components for reconstructing the target's basic structure, with which, detail enhancement is carried out by spatial analysis. Finally, the detail-enhanced image is analyzed again to extract more significant components, enabling the recovery of target details. Extensive experiments verify that our space-frequency method enhances imaging details at low sampling rates.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109640"},"PeriodicalIF":3.7,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-22 | DOI: 10.1016/j.optlaseng.2026.109623
Ultrafast VISAR velocity field reconstruction via deep unfolding networks and hardware-optimized deployment
Miao Li, Chaorui Chen, Xi Wang, Xinru Zhang, Youwei Dai, Longwu Luo
We present a shock-wave velocity field ultrafast imaging reconstruction algorithm and a hardware deployment scheme for interferometric imaging systems with arbitrary reflective surfaces. The algorithm unfolds the alternating direction method of multipliers (ADMM) iterations into trainable network layers, combining the interpretability of traditional optimization methods with deep learning to improve reconstruction accuracy. For hardware deployment, a co-optimization strategy based on operator mapping and 8-bit integer quantization is designed for CPUs and deep learning processing units (DPUs). Under high compression ratio encoding, the method achieves a peak signal-to-noise ratio (PSNR) of 29.53 dB in simulation, 11.46 dB higher than ADMM-TV and 6.91 dB higher than E-3DTV, with a structural similarity index (SSIM) of 0.88 and a learned perceptual image patch similarity (LPIPS) of 0.17. Experiments show that the maximum absolute error of the reconstructed velocity field is 1.58 km/s, with a relative error of 9.87%. Dynamic and static power consumption are reduced by 89.27% and 94.83%, respectively, without reducing reconstruction accuracy. These results show that the method improves reconstruction accuracy while reducing power consumption, addressing the limitations of traditional algorithms in both accuracy and deployment efficiency, and provides a reliable approach for dynamic reconstruction in ultrafast imaging of shock-wave velocity fields.
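A minimal sketch of the deep-unfolding idea, assuming a generic sparse-recovery formulation: one ADMM iteration (x-update, proximal z-update, dual update) becomes a network layer with learnable step-size, penalty, and threshold parameters, and several such stages are trained end to end. This illustrates unfolding in general, not the paper's exact VISAR architecture.

```python
# One stage of a generic deep-unfolded ADMM network: the classic
# x/z/dual updates become a layer with learnable scalars.
import torch
import torch.nn as nn

class ADMMStage(nn.Module):
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.01))  # gradient step size
        self.rho = nn.Parameter(torch.tensor(1.0))     # ADMM penalty
        self.theta = nn.Parameter(torch.tensor(0.1))   # soft threshold

    def forward(self, x, z, u, A, y):
        # x-update: gradient step on ||Ax - y||^2 + (rho/2)||x - z + u||^2
        grad = A.T @ (A @ x - y) + self.rho * (x - z + u)
        x = x - self.alpha * grad
        # z-update: soft-thresholding, a learnable prox of the regularizer
        v = x + u
        z = torch.sign(v) * torch.relu(torch.abs(v) - self.theta)
        u = u + x - z                                   # dual update
        return x, z, u

# A full network stacks K stages and trains them end to end:
#   for stage in stages: x, z, u = stage(x, z, u, A, y)
```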
{"title":"Ultrafast VISAR velocity field reconstruction via deep unfolding networks and hardware-optimized deployment","authors":"Miao Li, Chaorui Chen, Xi Wang, Xinru Zhang, Youwei Dai, Longwu Luo","doi":"10.1016/j.optlaseng.2026.109623","DOIUrl":"10.1016/j.optlaseng.2026.109623","url":null,"abstract":"<div><div>We present a shock-wave velocity field ultrafast imaging reconstruction algorithm and a hardware deployment scheme for interferometric imaging systems with arbitrary reflective surfaces. The algorithm unfolds the alternating direction method of multipliers (ADMM) iterations into trainable network layers. It combines the interpretability of traditional optimization methods with deep learning to improve reconstruction accuracy. For hardware deployment, a co-optimization strategy is designed. This strategy uses operator mapping and 8-bit integer quantization for CPUs and deep learning processing units (DPUs). Simulation and experimental results are provided. Under high compression ratio encoding, the method achieves a peak signal-to-noise ratio (PSNR) of 29.53 dB. This is 11.46 dB higher than ADMM-TV and 6.91 dB higher than E-3DTV. The structural similarity index (SSIM) reaches 0.88. The learned perceptual image patch similarity (LPIPS) is 0.17. Experiments show that the maximum absolute error of the reconstructed velocity field is 1.58 km/s, with a relative error of 9.87%. Dynamic and static power consumption are reduced by 89.27% and 94.83%, respectively, without reducing reconstruction accuracy. These results show that the method improves reconstruction accuracy and reduces power consumption. It also addresses limitations of traditional algorithms in both accuracy and deployment efficiency. The method provides a reliable approach for dynamic reconstruction in ultrafast imaging of shock-wave velocity fields.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109623"},"PeriodicalIF":3.7,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to achieve a satisfactory three-dimensional (3D) light field display, 3D image information with the correct spatial occlusion relations should be provided over a wide viewing-angle range. However, optical aberration and structural error are two key causes of deformation in reconstructed 3D images, especially at large viewing angles. Here, a joint optimization method of the optical structure and the image coding for 3D light field display is proposed to enhance the construction accuracy of voxels. A composite lens with an aperture is designed to suppress optical aberrations, and a pre-correction method based on optical path detection is implemented to further mitigate the structural errors and residual optical aberrations. Experimental results demonstrate that a high-precision 3D light field display with a 100-degree viewing angle is achieved through the proposed method: the voxel deviation is reduced from 51.1 mm to 3.1 mm compared with traditional methods. This level of precision meets the requirements of medical, military, and other demanding applications.
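A rough sketch of the pre-correction idea under stated assumptions: given a distortion map measured by optical path detection (the map_y/map_x arrays below are assumed inputs), the encoded image is pre-warped with that map so that the optics place voxels at their ideal positions.

```python
# Sketch of pre-correction by image re-encoding: warp the encoded image
# with a measured distortion map so residual errors cancel after the
# optics. map_y/map_x are assumed to come from optical path detection.
import numpy as np
from scipy.ndimage import map_coordinates

def precorrect(encoded_img, map_y, map_x):
    """encoded_img: (H, W); map_y/map_x: (H, W) arrays giving, for each
    ideal output pixel, the source pixel the optics actually image."""
    coords = np.stack([map_y.ravel(), map_x.ravel()])      # (2, H*W)
    warped = map_coordinates(encoded_img, coords, order=1, mode="nearest")
    return warped.reshape(encoded_img.shape)
```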
{"title":"3D light field display with enhanced reconstruction accuracy based on distortion- suppressed compound lens array and pre-correction encoded image","authors":"Xudong Wen, Xin Gao, Yaohe Zheng, Ziyun Lu, Jinhong He, Hanyu Li, Ningchi Li, Boyang Liu, Binbin Yan, Xunbo Yu, Xinzhu Sang","doi":"10.1016/j.optlaseng.2026.109630","DOIUrl":"10.1016/j.optlaseng.2026.109630","url":null,"abstract":"<div><div>In order to achieve a satisfactory three-dimensional (3D) light field display, the 3D image information with correct spatial occlusion relation should be provided in a wide viewing angle range. However, the optical aberration and the structural error are two key causes of the deformation of reconstructed 3D images, especially at the large viewing angle. Here, a joint optimization method of the optical structure and image coding for 3D light field display is proposed to enhance the construction accuracy of voxels. A composite lens with an aperture is designed to suppress optical aberrations. A pre-correction method based on optical path detection is implemented to further mitigate the structural errors and residual the optical aberrations. Experimental results demonstrated that the high-precision 3D light field display with a 100-degree viewing angle is achieved through the proposed method. The deviation of the voxel is reduced from 51.1 mm to 3.1 mm, compared with traditional methods. Medical, military and other applications with high-precision requirements can be met.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109630"},"PeriodicalIF":3.7,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring the topological charge (TC) of optical vortices is crucial for advancing applications in areas such as optical communication and quantum information processing. Although various interferometric and non-interferometric techniques have been developed for coherent and partially coherent beams, most of these methods are ineffective for fractional-vortex beams, especially when the beam is perturbed. In this work, we propose and experimentally demonstrate a simple, non-interferometric technique based on autocorrelation for assessing and quantitatively measuring the TC of fractional vortex beams. We generated fractional optical vortex beams using computer-generated fork-shaped holograms and then obtained the corresponding random optical patterns after scattering through a rough surface. The autocorrelation rings of the random patterns provide the TC of fractional vortex beams, and their asymmetric shape gradually becomes symmetric as the TC approaches an integer value. Additionally, by examining the divergence of the first dark ring with respect to propagation distance, we can quantitatively estimate the fractional TC. The measured divergence closely matches theoretical results, achieving an accuracy of over 98%. The proposed method eliminates the need for phase retrieval, coherence modulation, or interferometry, providing a practical and robust solution for measuring fractional TCs, even in the presence of perturbations such as scattering and mild atmospheric turbulence, which are common in free-space optical communication systems.
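The core computation is compact enough to sketch: by the Wiener-Khinchin theorem, the intensity autocorrelation of a speckle pattern is the inverse Fourier transform of its power spectrum. The DC removal and normalization below are illustrative choices.

```python
# Intensity autocorrelation of a speckle pattern via the Wiener-Khinchin
# theorem: autocorrelation = IFFT(|FFT(I)|^2). The ring structure of
# this map carries the (fractional) topological charge.
import numpy as np

def intensity_autocorrelation(I):
    """I: 2D speckle intensity; returns the centered autocorrelation."""
    I = I - I.mean()                        # remove the DC pedestal
    S = np.abs(np.fft.fft2(I)) ** 2         # power spectrum
    ac = np.fft.ifft2(S).real               # Wiener-Khinchin
    return np.fft.fftshift(ac) / ac.max()   # peak-normalized, centered
```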
{"title":"Evolution of fractional vortices through intensity autocorrelation of scattered speckle patterns","authors":"MD. Haider Ansari , Velagala Ganesh , Sakshi Choudhary , Ravi Kumar , Shashi Prabhakar , Salla Gangi Reddy","doi":"10.1016/j.optlaseng.2026.109637","DOIUrl":"10.1016/j.optlaseng.2026.109637","url":null,"abstract":"<div><div>Measuring the topological charge (TC) of optical vortices is crucial for advancing applications in areas such as optical communication and quantum information processing. Although various interferometric and non-interferometric techniques have been developed for coherent and partially coherent beams, most of these methods are ineffective for fractional-vortex beams, especially when the beam gets perturbed. In this work, we propose and experimentally demonstrate a simple, non-interferometric technique based on autocorrelation for assessing and quantitatively measuring the TC of fractional vortex beams. We generated fractional optical vortex beams using computer-generated fork-shaped holograms and then obtained the corresponding random optical patterns after scattering through a rough surface. The autocorrelation rings of random patterns provide the TC of fractional vortex beams, and the asymmetry gradually becomes symmetric as the TC approaches an integer value. Additionally, by examining the divergence of the first dark ring with respect to propagation distance, we can quantitatively estimate the fractional TC. The measured divergence closely matches theoretical results, achieving an accuracy of over 98 %. The proposed method eliminates the need for phase retrieval, coherence modulation, or interferometry, providing a practical and robust solution for measuring fractional TCs, even in the presence of perturbations such as scattering and mild atmospheric turbulence, which are common in free-space optical communication systems.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109637"},"PeriodicalIF":3.7,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-20 | DOI: 10.1016/j.optlaseng.2026.109624
High efficiency Raman microscopic imaging based on a dispersion-interference hybrid spectrometer
Huitong Huang, Xin Meng, Xiaohong Jiang, Ao Wang, Yixuan Xu, Zhibiao Liu, Yixuan Liu, Jianxin Li
Optical throughput and spectral resolution are two critical parameters for achieving rapid and accurate Raman microscopic imaging. However, conventional methods often face a trade-off between the two. In response to this limitation, this work proposes a high-efficiency Raman microscopic imaging system based on a dispersion-interference hybrid spectroscopic architecture. By integrating Amici prisms into a Sagnac interferometer, the system enables spectral shearing control without the need for an entrance slit, achieving both high throughput and good spectral resolution. Using a 785 nm laser, the system was evaluated on Nd:Y₃Al₅O₁₂ ceramics, fluorite crystals, and SERS-enhanced 4-aminothiophenol samples on a gold substrate, achieving a point spectral resolution of 10 cm⁻¹ and a full width at half maximum (FWHM) resolution of 25 cm⁻¹. Combined with a 1024 × 1024 pixel EMCCD camera, the system supports area imaging and employs a one-dimensional push-broom scanning strategy to efficiently acquire the entire field of view, significantly improving imaging speed compared to conventional point-by-point scanning. Experimental results demonstrate that the system offers high throughput, high resolution, and rapid Raman hyperspectral imaging capabilities, with potential applications in material analysis, biological detection, and chemical imaging.
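As an illustration of the push-broom assembly described above, the sketch below stacks per-position camera frames, each holding one spatial line resolved along the spectral axis, into a hyperspectral cube. The frame sizes and scan count are toy values, not the real 1024 × 1024 EMCCD geometry.

```python
# Sketch of one-dimensional push-broom acquisition: each frame holds one
# spatial line (rows) resolved in the spectral dimension (columns);
# scanning perpendicular to that line stacks frames into a cube.
import numpy as np

def assemble_cube(frames):
    """frames: iterable of (n_line, n_spectral) arrays, one per scan
    position; returns an (n_scan, n_line, n_spectral) data cube."""
    return np.stack(list(frames), axis=0)

# Toy example (the real sensor is a 1024 x 1024 EMCCD):
cube = assemble_cube(np.random.rand(100, 64, 64))   # 100 scan steps
print(cube.shape)   # (100, 64, 64): scan axis, spatial line, spectral axis
```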
{"title":"High efficiency Raman microscopic imaging based on a dispersion-interference hybrid spectrometer","authors":"Huitong Huang , Xin Meng , Xiaohong Jiang , Ao Wang , Yixuan Xu , Zhibiao Liu , Yixuan Liu , Jianxin Li","doi":"10.1016/j.optlaseng.2026.109624","DOIUrl":"10.1016/j.optlaseng.2026.109624","url":null,"abstract":"<div><div>Optical throughput and spectral resolution are two critical parameters for achieving rapid and accurate Raman microscopic imaging. However, conventional methods often face a trade-off between these two performances. In response to this limitation, this work proposes a high-efficiency Raman microscopic imaging system based on a dispersion-interference hybrid spectroscopic architecture. By integrating Amici prisms into a Sagnac interferometer, the system enables spectral shearing control without the need for an entrance slit, achieving both high throughput and good spectral resolution. Using a 785 nm laser, the system was evaluated on Nd: Y₃Al₅O₁₂ ceramics, fluorite crystals, and SERS-enhanced 4-aminothiophenol samples on a gold substrate, achieving a point spectral resolution of 10 cm⁻¹ and a full width at half maximum (FWHM) resolution of 25 cm⁻¹. Combined with a 1024 × 1024 pixel EMCCD camera, the system supports area imaging and employs a one-dimensional push-broom scanning strategy to efficiently acquire the entire field of view, significantly improving imaging speed compared to conventional point-by-point scanning methods. Experimental results demonstrate that the system offers high throughput, high resolution, and rapid Raman hyperspectral imaging capabilities, with potential applications in material analysis, biological detection, and chemical imaging.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109624"},"PeriodicalIF":3.7,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}