3D profilometric object detection in turbid water using integral imaging and deep neural networks
Alex Maric, Xin Shen, Gregory Aschenbrenner, Bahram Javidi
Optics Express 34(5), 9238-9250 (2026). DOI: 10.1364/OE.583889

We evaluate a three-dimensional (3D) object detection system for operation in turbid water based on 3D profilometric integral imaging (InIm) and a deep neural network. While conventional InIm computational reconstruction provides two-dimensional (2D) slices of the 3D scene at specific depth planes, 3D profilometry allows visualization of the 3D surface from unique perspectives. In the proposed method, we develop a deep neural network-based red, green, blue-depth (RGB-D) object detection framework using passive 3D profilometry under turbid conditions. An image sensor on a moving platform captures multiple 2D perspective images of the 3D scene, from which a depth map is statistically estimated. The captured perspective image and the estimated depth map are then fused to generate a four-channel RGB-D image for 3D object detection in turbidity. Comparative experiments demonstrate that the proposed 3D profilometry-based approach outperforms both 2D imaging and conventional 3D InIm-based reconstruction across the turbidity levels evaluated. To the best of our knowledge, this is the first report on InIm 3D profilometry for object detection in turbid water.
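The four-channel fusion step described above can be sketched as follows. The function name `fuse_rgbd` and the min-max depth normalization are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def fuse_rgbd(rgb, depth):
    """Stack an RGB perspective image and an estimated depth map into a
    four-channel RGB-D array of shape (H, W, 4)."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("rgb and depth must share spatial dimensions")
    d = depth.astype(np.float64)
    rng = d.max() - d.min()
    # normalize depth to [0, 1] so it is on a comparable scale to RGB
    d = (d - d.min()) / rng if rng > 0 else np.zeros_like(d)
    return np.dstack([rgb.astype(np.float64), d])
```

The resulting (H, W, 4) array is what an RGB-D detection network would consume in place of a plain three-channel image.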
Close-up hyperspectral image focus stacking through Laplacian pyramid fusion
Daniel Synek, Lukáš Krauz, Petr Páta
Optics Express 34(5), 8795-8815 (2026). DOI: 10.1364/OE.588317

Hyperspectral imaging has emerged as a tool for detailed spectral and image analysis, with applications in various fields. Initially developed for remote sensing, the technology utilizes hundreds of spectral channels and is increasingly being used in laboratory settings as well. However, laboratory settings present significant challenges for detailed, close-range analysis of small samples due to the large image magnification of a hyperspectral system and its specialized optics. The limited working distance of close-up hyperspectral imaging results in a shallow depth of field, causing blurred regions when imaging non-flat samples. This paper presents a computationally efficient multi-focal-plane fusion algorithm for hyperspectral images. The algorithm integrates complementary spatial information from different focal depths while preserving the reflectance of the original data. The core of the presented hyperspectral focus stacking method is based on Laplacian pyramid decomposition combined with local sharpness metrics using standard deviation statistics. The proposed approach is tuned by three fusion control parameters. These parameters are adjusted and optimized using selected no-reference image quality metrics, such as the naturalness image quality evaluator (NIQE), the perceptual image quality evaluator (PIQE), and the blind/referenceless image spatial quality evaluator (BRISQUE), as well as general global sharpness measures and the proposed local sharpness evaluation procedure. Experimental evaluations demonstrate that, with appropriate parameter settings, the fused hyperspectral image consistently exhibits higher gradient-based sharpness than any individual input image, while maintaining spectral integrity. This approach is well suited to close-up, hyperspectral-based laboratory analysis of a variety of samples with limited depth of field.
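A minimal sketch of Laplacian-pyramid focus stacking follows. It uses per-coefficient magnitude as a crude stand-in for the paper's local standard-deviation sharpness metric, and a block-mean pyramid instead of a proper Gaussian one; all names are illustrative:

```python
import numpy as np

def _down(img):
    # 2x2 block mean: a crude stand-in for Gaussian downsampling
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    im = img[:h, :w]
    return 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] + im[0::2, 1::2] + im[1::2, 1::2])

def _up(img, shape):
    # nearest-neighbour upsampling back to `shape`
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        nxt = _down(cur)
        pyr.append(cur - _up(nxt, cur.shape))  # band-pass detail
        cur = nxt
    pyr.append(cur)                            # low-pass residual
    return pyr

def focus_stack(images, levels=3):
    """At each detail level keep the coefficient with the largest
    magnitude across the input focal planes; average the residual."""
    pyrs = [laplacian_pyramid(im, levels) for im in images]
    fused = []
    for lvl in range(levels):
        stack = np.stack([p[lvl] for p in pyrs])
        idx = np.abs(stack).argmax(axis=0)
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    fused.append(np.mean([p[levels] for p in pyrs], axis=0))
    out = fused[-1]                            # collapse the fused pyramid
    for lap in reversed(fused[:-1]):
        out = _up(out, lap.shape) + lap
    return out
```

With identical inputs the fusion reduces to exact pyramid reconstruction, which is a convenient sanity check; per-band selection only matters when the focal planes differ.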
Task-driven lens design
Xinge Yang, Qiang Fu, Yunfeng Nie, Wolfgang Heidrich
Optics Express 34(5), 8961-8975 (2026). DOI: 10.1364/OE.588912

Classical lens design minimizes optical aberrations to produce sharp images, but is typically decoupled from downstream computer vision tasks. Existing end-to-end optical design learns optical encoding through joint optimization, but often suffers from an unstable training process. We propose task-driven lens design, which we believe to be a new optimization philosophy for joint optics-network systems. We freeze the pretrained vision model and optimize only the lens so that the image formation better fits the model's feature preferences. This network-frozen setting yields a low-dimensional and stable optimization process, enabling lens design from scratch without human intervention and thereby exploring a broader design space. Multiple computer vision experiments show that TaskLenses outperform classical ImagingLenses with the same or even fewer elements. Our analysis reveals that the learned optics exhibit long-tailed point spread functions, better preserving preferred structural cues when aberrations cannot be fully corrected. These results highlight task-driven design as a practical route to optical lenses that are compatible with modern vision models, and also point toward what we believe to be new optical design objectives beyond traditional aberration minimization.
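The network-frozen optimization loop can be illustrated with a toy 1-D analogue. Here a fixed random linear scorer stands in for the frozen pretrained network, a single Gaussian-blur width is the only lens parameter, and the search is a coarse grid sweep; none of this reflects the authors' actual models or optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "vision model": a fixed random linear scorer standing in for a
# pretrained backbone (the real method freezes an actual vision network).
W = rng.normal(size=16)

def render(scene, sigma):
    # Toy image formation: 1-D Gaussian blur whose width `sigma` is the
    # only free lens parameter.
    x = np.arange(-7, 8)
    psf = np.exp(-x**2 / (2.0 * sigma**2))
    psf /= psf.sum()
    return np.convolve(scene, psf, mode="same")

def task_loss(sigma, scenes, targets):
    preds = np.array([W @ render(s, sigma) for s in scenes])
    return float(np.mean((preds - targets) ** 2))

scenes = rng.normal(size=(32, 16))
targets = scenes @ W  # task labels defined on the sharp scenes

# Network-frozen optimization: only the lens parameter is searched.
sigmas = np.linspace(0.3, 3.0, 28)
best_sigma = min(sigmas, key=lambda s: task_loss(s, scenes, targets))
```

The point of the exercise is structural: `W` never changes, so the search space is one-dimensional and the optimization is trivially stable, mirroring the paper's argument for freezing the network.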
Coherent line-beam LiDARs using polymer waveguide optical phased array
Jinung Jin, Eun-Su Lee, Kwon-Wook Chun, Min-Cheol Oh
Optics Express 34(5), 9158-9165 (2026). DOI: 10.1364/OE.588571

By incorporating a polymer waveguide optical phased array (OPA) device emitting a line-beam output, we demonstrated a coherent-detection LiDAR system. The OPA comprises 128 polymer-waveguide phase modulators, providing efficient low-power phase modulation. A line-beam output was emitted from the end-fired waveguide array of the OPA, which was driven by a directly modulated DFB laser with a linear chirp rate of 83 THz/s. An iterative chirp linearization process was incorporated for precise measurement of the Doppler-induced beat frequencies produced by a moving target. Compared with conventional raster scanning, the proposed line-beam scanning method reduces the acquisition time for a 4D point cloud. With the fabricated OPA chip exhibiting an insertion loss of 6.5 dB and a field of view of 30° × 32°, simultaneous detection of distance and velocity was demonstrated, distinguishing stationary from moving targets.
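A common way to separate range and Doppler with a linearly chirped source is triangular up/down chirping. The abstract does not spell out the authors' exact processing, so the following is a generic textbook sketch using the quoted 83 THz/s chirp rate and an assumed telecom-band wavelength:

```python
C = 299_792_458.0   # speed of light, m/s
GAMMA = 83e12       # chirp rate from the paper, Hz/s
LAM = 1.55e-6       # assumed operating wavelength, m (not stated in the abstract)

def range_velocity(f_up, f_down):
    """Triangular-FMCW inversion: the up-chirp beat is f_r - f_d and the
    down-chirp beat is f_r + f_d, so summing cancels Doppler and
    differencing cancels the range term."""
    f_range = 0.5 * (f_up + f_down)
    f_dopp = 0.5 * (f_down - f_up)
    R = C * f_range / (2.0 * GAMMA)   # beat-to-range: f_r = 2*R*gamma/c
    v = LAM * f_dopp / 2.0            # Doppler: f_d = 2*v/lambda
    return R, v
```

This is what "simultaneous detection of distance and velocity" amounts to per point: two beat-frequency measurements inverted through two linear relations.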
Single-shot common-path Fizeau interferometer for vibration-insensitive nanometric displacement measurement
Qingze Chen, Xiaoyan Wang, Songjie Luo, Huiling Huang, Panfeng Ding, Huichuan Lin, Ziyang Chen, Jixiong Pu
Optics Express 34(5), 8613-8620 (2026). DOI: 10.1364/OE.590448

We present a single-shot, common-path Fizeau interferometer for high-precision micro-displacement measurement. By integrating a wire-grid polarizer with a pixelated polarization camera, the system implements spatial polarization phase-shifting for real-time phase retrieval. The common-path configuration and single-exposure acquisition significantly enhance robustness against environmental vibrations and air turbulence. The polarization-multiplexing scheme enables single-exposure measurement, granting the system the capability for real-time dynamic displacement monitoring. Experimental results demonstrate that the proposed method is capable of measuring displacements as small as 2 nm with high repeatability. This compact, vibration-insensitive design offers a robust solution for dynamic nanometric metrology in precision engineering.
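Spatial polarization phase-shifting with a pixelated polarization camera typically reduces to standard four-bucket phase retrieval, since the 0°/45°/90°/135° superpixels sample the interferogram at phase steps of 0, π/2, π, 3π/2. A minimal sketch, with the 632.8 nm wavelength an assumption (the abstract does not state the source):

```python
import numpy as np

def phase_from_polarization_frames(i0, i45, i90, i135):
    """Four-bucket phase retrieval from the 0/45/90/135-degree superpixels
    of a pixelated polarization camera: with circularly polarized
    reference and test beams, analyzer angle theta shifts the fringe
    phase by 2*theta."""
    return np.arctan2(i135 - i45, i0 - i90)

def displacement_nm(phase, wavelength_nm=632.8):
    # Double-pass Fizeau cavity: a mirror displacement d shifts the phase
    # by 4*pi*d/lambda, so d = phase * lambda / (4*pi).
    return phase * wavelength_nm / (4.0 * np.pi)
```

Because all four buckets come from one exposure, a single camera frame yields a full phase map, which is what makes the scheme vibration-insensitive.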
KHz rate 3D white light particle tracking with a tunable acoustic gradient (TAG) lens
Rees P Verleur, Catriona M L White, Shishir Tumma, Timothee L Pourpoint, Daniel R Guildenbecher
Optics Express 34(5), 8552-8568 (2026). DOI: 10.1364/OE.588303

This work develops a 70 kHz tunable acoustic gradient (TAG) lens and high-speed camera configuration for three-dimensional diagnostics of multiphase particle flows. The experimental scene is back-illuminated with a pulsed LED, driven by custom hardware, to capture images at variable phase delays with respect to the sinusoidal TAG lens focal sweep. The proposed calibration methodology and data-processing techniques automate the 3D localization and tracking of particles recorded by this configuration. Capabilities are quantitatively assessed by investigating the conically expanding particle field produced by a vibrating nozzle, and good agreement between the statistics of the velocity components demonstrates comparable accuracy in the in-plane and optical-depth directions. Finally, capabilities are demonstrated for a challenging and practical measurement of the hypergolic reaction of nitrogen tetroxide (NTO) and monomethylhydrazine (MMH). TAG recordings are shown to provide finer depth resolution and reduced susceptibility to imaging noise compared with the common digital inline holography (DIH) diagnostic. The developed capabilities are expected to have widespread utility in future studies of transient and 3D multiphase flows.
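The core of the depth encoding is that the TAG lens sweeps its focal plane sinusoidally at 70 kHz, so the LED pulse's phase delay within one sweep period selects the instantaneous focal depth. A minimal sketch, with the mean position `Z0` and sweep amplitude `A` assumed values rather than the paper's calibrated ones:

```python
import numpy as np

F_TAG = 70e3           # TAG lens drive frequency from the paper, Hz
Z0, A = 0.0, 1.0e-3    # assumed mean focal position and sweep amplitude, m

def depth_from_phase_delay(delay_s):
    """Map the LED pulse delay within one 70 kHz TAG period to the
    instantaneous focal-plane depth of the sinusoidal focal sweep."""
    phase = 2.0 * np.pi * F_TAG * delay_s
    return Z0 + A * np.sin(phase)
```

In the actual system this mapping is established by calibration; the sketch only shows why stepping the pulse delay scans the in-focus plane through the particle field.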
Anti-laser-jamming imaging strategy for cameras based on correlated double sampling technique
Zizheng Gao, Kai Fu, Jiefeng Mu, Cheng Fei, Shuzhen Fan, Pengfei Hu, Junliang Liu, Zhaojun Liu, Xian Zhao, Yongfu Li
Optics Express 34(5), 9251-9265 (2026). DOI: 10.1364/OE.584409

Laser jamming has emerged as a critical challenge for cameras: with the widespread deployment of automotive light detection and ranging (LiDAR), the risks of detector saturation and image distortion have increased significantly. To address this problem, this paper proposes an anti-laser-jamming imaging strategy based on correlated double sampling (CDS). Through an alternating dual-channel sampling mechanism, the camera can remain operational under laser jamming, ensuring the acquisition of at least one complete image frame. We constructed an experimental system consisting of a laser jamming unit, a photoelectric detection and control unit, a high-speed comparator unit, and a short-wave infrared (SWIR) camera to carry out the verification. Laboratory simulation experiments were conducted under different laser repetition frequencies and pulse widths, followed by anti-laser-jamming experiments using mechanical LiDAR systems and real-vehicle tests in road environments. The results demonstrate that the CDS-based anti-laser-jamming strategy effectively suppresses LiDAR-induced disturbances, achieving stable and high-fidelity imaging performance in road environments. This study provides a practical and scalable solution for enhancing the resilience of automotive imaging sensors against laser jamming, offering broad application prospects in autonomous driving, surveillance, and security.
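The selection logic behind "at least one complete image frame" can be sketched as follows. This is a simplified stand-in for the paper's CDS-based channel switching: two alternating samples of the same frame are compared, and the one less affected by a jamming pulse is kept. The saturation level is an assumed 12-bit value:

```python
import numpy as np

SAT = 4095  # assumed 12-bit saturation level

def select_clean_frame(frame_a, frame_b):
    """Alternating dual-channel readout: keep whichever of the two
    samples of a frame contains the smaller fraction of saturated
    pixels (a simple proxy for laser-jamming corruption)."""
    frac_a = np.mean(frame_a >= SAT)
    frac_b = np.mean(frame_b >= SAT)
    return frame_a if frac_a <= frac_b else frame_b
```

Since a nanosecond-scale jamming pulse can corrupt at most one of the two interleaved samples, the selected output remains usable regardless of pulse timing.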
Optical data augmentation enhances out-of-distribution robustness of scattering medium image transmission
Wei Zhou, Yu Tian, Jianyang Shi, Junwen Zhang, Feng Bao, Nan Chi, Ziwei Li
Optics Express 34(5), 8134-8148 (2026). DOI: 10.1364/OE.585696

Deep learning has been successfully applied in imaging through scattering media, enabling direct recovery of the input light field from output speckle patterns. However, due to the high degrees of freedom in the light scattering process, current reconstruction methods only work well within the trained data domain. Hence, achieving out-of-distribution (OOD) robustness in unseen scenes usually requires extensive experimental data collection. To overcome this limitation, we propose an optical image mixing (OIM) approach that introduces an efficient optical-domain data augmentation strategy to enhance model generalization under limited data conditions. By physically mixing optical images, OIM expands the effective training distribution without additional sample collection. We experimentally validate the proposed method on a multimode fiber (MMF) platform. With only 200 measured images and a 40-fold data augmentation by OIM, we achieve generalized reconstruction for 4096-pixel grayscale images. Compared with conventional models without OIM, we improve the image reconstruction fidelity by 26.2%. The results validate that OIM can serve as a plug-and-play module to enhance the generalization performance of existing reconstruction networks in computational imaging applications.
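Structurally, OIM resembles mixup applied to paired (input image, output speckle) examples. A minimal numerical sketch follows; note the paper performs the mixing optically, before the fiber, so mixing recorded intensities like this is only an approximation that assumes an intensity-linear system, and all names and the 200-image/40-fold figures stand in as parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def oim_augment(images, speckles, n_new, alpha=0.5):
    """Mixup-style pairing of measured (image, speckle) examples:
    draw random pairs and combine each pair with a Beta-distributed
    mixing weight, augmenting both sides of the dataset consistently."""
    n = len(images)
    idx_a = rng.integers(0, n, size=n_new)
    idx_b = rng.integers(0, n, size=n_new)
    lam = rng.beta(alpha, alpha, size=n_new)[:, None, None]
    mixed_img = lam * images[idx_a] + (1.0 - lam) * images[idx_b]
    mixed_spk = lam * speckles[idx_a] + (1.0 - lam) * speckles[idx_b]
    return mixed_img, mixed_spk
```

A 40-fold augmentation of 200 measured pairs would correspond to `n_new = 8000`; the augmented pairs are then fed to the reconstruction network alongside the measured ones.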
Tunable, spatially separated twin beam deep-UV third harmonic generation via a transient Kerr grating in thin dielectrics
Ajinkya Punjal, Vivek Dwij, Ruturaj Puranik, Rodney Bernard, Aditya K Dharmadhikari, Shriganesh Prabhu
Optics Express 34(5), 9189-9198 (2026). DOI: 10.1364/OE.590515

We report directional, tunable third-harmonic generation (THG) in the deep-UV (DUV) (220-270 nm) from thin transparent dielectrics using a Kerr-induced transient grating (TG). Two noncollinear femtosecond pulses induce a transient Kerr grating whose wavevector K⃗_TG = k⃗_1 - k⃗_2 provides a quasi-phase-matching contribution that compensates the phase mismatch Δk = k_3ω - 3k_ω, enabling THG via degenerate four-wave mixing and yielding twin, spatially separated DUV beams. With a fixed crossing angle, the TH signal exhibits cubic intensity scaling and a zero-delay temporal gate, confirming its ultrafast χ(3) origin. Across DUV-transparent solids, efficiency under this fixed geometry correlates with DUV dispersion, apart from the reported variations in χ(3). In CaF2, we measure a 0.14% conversion efficiency at 266 nm, which is ∼20× higher than measured in quartz under identical experimental conditions, despite a similar order of magnitude of χ(3). TG-assisted THG thus offers a compact, simple-to-align route to directionally separated, femtosecond-pumped, continuously tunable DUV beams. We outline avenues to maximize efficiency, angle tuning for increasing the effective coherence length, and material selection for a twin DUV source useful in ultrafast and photoemission spectroscopy.
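The phase mismatch that the transient grating compensates is straightforward to evaluate numerically. The sketch below uses approximate literature-style refractive indices for CaF2 near 800 nm and its third harmonic as stand-in values, not numbers from the paper:

```python
import numpy as np

def delta_k(n_w, n_3w, lam_pump_m):
    """Collinear THG phase mismatch dk = k_3w - 3*k_w; the transient
    grating wavevector K_TG = k1 - k2 supplies the momentum needed to
    close this gap."""
    k_w = 2.0 * np.pi * n_w / lam_pump_m
    k_3w = 2.0 * np.pi * n_3w / (lam_pump_m / 3.0)
    return k_3w - 3.0 * k_w

def coherence_length_m(dk):
    # distance over which the harmonic stays in phase with its drive
    return np.pi / abs(dk)
```

With assumed indices n(800 nm) ≈ 1.431 and n(267 nm) ≈ 1.463 for CaF2, the coherence length comes out in the few-micrometre range, consistent with the abstract's point that efficiency in thin samples is governed by DUV dispersion.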
Narrow-linewidth 3980 nm laser source based on ZGP OPO in the non-planar ring cavity with etalons
Achille Bogas-Droy, Marcin Piotrowski, Gerhard Spindler, Stefano Bigotta, Nicolas Dalloz, Anne Hildenbrand-Dhollande
Optics Express 34(5), 9062-9072 (2026). DOI: 10.1364/OE.587326

We report a compact, high-power ZnGeP2 (ZGP) optical parametric oscillator (OPO) pumped by a Ho-doped nanosecond pulsed laser at 2.06 µm that delivers narrow-linewidth emission centered at 3.98 µm with multi-watt output. The OPO employs a non-planar ring resonator incorporating two intracavity silicon etalons to enforce spectral selection. By adjusting the relative tilt of the etalons, we achieve narrowband emission under a type-I phase matching configuration, with a 3.7 nm linewidth at 3.98 µm and 4.7 W output power. Compared with the free-running broadband configuration (i.e., without etalons), the spectrally narrowed OPO provides a 10-fold increase in power spectral density in the target region, with only a modest reduction in overall efficiency. Given the applicability of the narrowing method over the wide gain bandwidth of ZGP, we also demonstrate narrow emission simultaneously at 3.8 and 4.5 µm. The spectral characteristics of the OPO source are well reproduced by a numerical model in both broadband and narrowband operation.
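The spectral selection by intracavity etalons is governed by their free spectral range, FSR = c / (2 n d) at normal incidence. The sketch below uses an assumed etalon thickness and a generic mid-IR silicon index, since the abstract does not state the etalon dimensions:

```python
C = 299_792_458.0  # speed of light, m/s

def etalon_fsr_hz(thickness_m, n=3.48):
    """Free spectral range of a solid silicon etalon at normal incidence:
    FSR = c / (2 n d). n ~ 3.48 is an assumed mid-IR index for silicon."""
    return C / (2.0 * n * thickness_m)

def fsr_to_nm(fsr_hz, lam_m):
    # express a frequency FSR in wavelength units around lambda
    return fsr_hz * lam_m**2 / C * 1e9
```

For a hypothetical 500 µm etalon the FSR near 3.98 µm is a few nanometres, the same order as the reported 3.7 nm linewidth; tilting two such etalons against each other shifts their transmission combs, which is the knob the authors use for wavelength selection.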