
Optics and Lasers in Engineering: Latest Publications

Far field single-pixel image-free tracking for moving object
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-24 | DOI: 10.1016/j.optlaseng.2026.109651
Yunsong Gu , Huahua Wang , Han Zhuang , Bingjie Li , Hongyue Xiao , Changqi Zhang , Hongwei Jiang , Haochong Huang , Zhiyuan Zheng , Ze Zhang , Lu Gao
Real-time tracking of key parts of moving objects remains a major challenge in practical single-pixel imaging systems. In this work, we propose a far-field single-pixel image-free tracking framework for moving objects based on an image-free tracking network (IFTN). By jointly optimizing structured illumination and network training, the proposed method directly maps one-dimensional single-pixel measurements to target coordinates without image reconstruction, effectively mitigating motion-blur-induced accuracy degradation. Experimental results demonstrate that the proposed approach achieves a tracking accuracy of 85.8% with a mean absolute percentage error of 8.4% at a sampling rate of 6.25%, and a tracking speed of 95.2 Hz. Furthermore, the system enables far-field tracking at distances up to 80 m. Compared with traditional single-pixel imaging-based tracking methods that rely on image reconstruction, the proposed method improves tracking accuracy by more than six times. This work provides an efficient and flexible solution for image-free tracking in remote sensing and related applications.
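The abstract quantifies accuracy with a mean absolute percentage error (MAPE) of 8.4% between predicted and true target coordinates. As a reference point only, the standard MAPE metric can be sketched as below; the per-coordinate averaging and the toy coordinates are assumptions, not the authors' exact definition or data.

```python
import numpy as np

def mape(pred, true):
    """Mean absolute percentage error between predicted and true
    target coordinates, in percent (illustrative metric sketch)."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return 100.0 * np.mean(np.abs(pred - true) / np.abs(true))

# Toy example: predicted vs. ground-truth (x, y) centroids in pixels.
true = np.array([[100.0, 200.0], [110.0, 190.0]])
pred = np.array([[104.0, 196.0], [112.0, 195.0]])
print(round(mape(pred, true), 2))  # → 2.61
```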
Citations: 0
S3CRAD: Superpixel-guided background inpainting and spatial-spectral constrained representation for hyperspectral anomaly detection
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-24 | DOI: 10.1016/j.optlaseng.2026.109657
Mingtao You , Yiming Yao , Dong Zhao , Zhe Zhao , Pattathal V. Arun , Yian Wang , Huixin Zhou , Ronghua Chi
Hyperspectral anomaly detection techniques aim to effectively separate anomalies from the background. Most of the existing approaches do not focus on the contours of anomalous targets, resulting in blurred detection results. In order to overcome this challenge, we propose a superpixel-guided background inpainting and spatial-spectral constrained representation method for hyperspectral anomaly detection (S3CRAD). Specifically, we propose a superpixel-guided strategy that highlights the boundary information between anomalies and the background. Moreover, the existing methods do not fully exploit the differences between anomalies and the background during background reconstruction. Hence, we propose a multi-feature fusion strategy that considers the differences in image contrasts, further emphasizing the difference between anomaly and background pixels. Finally, we propose a spatial-spectral weighting scheme to regularize the representation coefficients, thereby exploiting spatial and spectral information more effectively than existing methods. With the regularized coefficients, the target pixel is better reconstructed via representation. The anomaly result is obtained by computing the residual between the original and reconstructed pixels. The key advantage of our method lies in its ability to fully utilize both spatial and spectral information while effectively reducing the impact of noise on anomaly detection results. Experimental results demonstrate that our approach outperforms nine state-of-the-art methods.
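The final scoring step described above, the residual between the original and reconstructed pixels, can be sketched as follows. The L2 norm over the spectral axis and the toy "reconstruction" are assumptions for illustration; the paper's constrained-representation model is far richer.

```python
import numpy as np

def anomaly_map(cube, recon):
    """Per-pixel anomaly score as the L2 spectral residual between a
    hyperspectral cube and its background reconstruction (H, W, B).
    Illustrative residual step only, not the S3CRAD pipeline."""
    return np.linalg.norm(cube - recon, axis=-1)

rng = np.random.default_rng(0)
background = rng.normal(0, 0.05, (8, 8, 30))   # toy background cube
cube = background.copy()
cube[4, 4] += 1.0                              # inject a spectral anomaly
scores = anomaly_map(cube, background)         # residual vs. clean background
i, j = np.unravel_index(scores.argmax(), scores.shape)
print(int(i), int(j))                          # → 4 4 (the injected pixel)
```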
Citations: 0
Large-aperture nonparaxial Alvarez lenses enabling varifocal augmented reality head-up displays with a wide field of view
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-24 | DOI: 10.1016/j.optlaseng.2026.109638
Yi Liu, Haoteng Liu, Zhiqing Zhao, Qimeng Wang, Weiji Liang, Bo-Ru Yang, Zong Qin
Augmented reality head-up displays (AR-HUDs) serve as in-vehicle human-machine interfaces, with a critical requirement for a continuously adjustable virtual image distance (VID) across a large field of view (FOV). This adaptability allows the virtual image to align with objects at varying depths in the road environment, thereby mitigating visual fatigue. An AR-HUD employs a wide FOV and a large eyebox, resulting in a high étendue, which necessitates optical elements spanning more than 10 centimeters. Consequently, the varifocal component must feature a large aperture and be thin to ensure a compact form factor suitable for automotive integration. Among various slim varifocal optics, Alvarez lenses, which work through transverse lens displacement, offer good scalability due to the mature fabrication of freeform optics. However, increasing the aperture introduces significant aberrations in the conventional cubic-form Alvarez lens design. This study presents optimal design rules for applying Alvarez lenses to AR-HUDs, including the displacement method, higher-order terms beyond the classic cubic form, and co-optimization with the freeform mirror. An AR-HUD prototype with a FOV of 13° by 5° and an eyebox of 130 mm by 60 mm was built using Alvarez lenses. A continuously variable VID ranging from 2.5 to 7.5 m was achieved with a resolution of more than 60 pixels per degree. The Alvarez lenses span more than 20 cm transversely and are only 2.5 cm thick, enabling a compact volume of 10.4 liters—almost no increase from that of a single-focal HUD.
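For background, the classic cubic-form Alvarez lens works as follows: two complementary plates with thickness t(x, y) = A(x·y² + x³/3), displaced laterally by +d and −d, combine into a purely quadratic thickness profile 2Ad(x² + y²) plus a constant, i.e. a lens whose power scales linearly with d. The check below verifies this textbook paraxial identity numerically; it is illustrative math only, not the paper's higher-order nonparaxial design.

```python
import numpy as np

A, d = 0.5, 0.1  # cubic coefficient and lateral displacement (arbitrary units)

def t(x, y):
    """Classic cubic Alvarez surface t(x, y) = A*(x*y^2 + x^3/3)."""
    return A * (x * y**2 + x**3 / 3.0)

x, y = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
T = t(x + d, y) - t(x - d, y)                   # combined plate thickness
quad = 2 * A * d * (x**2 + y**2) + (2 / 3) * A * d**3
print(np.allclose(T, quad))                     # → True: pure quadratic + constant
```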
Citations: 0
Wavefront-driven optimization for high-quality hologram generation
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-23 | DOI: 10.1016/j.optlaseng.2026.109634
Zhenyang Zhang , Xianrui Feng , Hao Jiang , Jian Wang , Zhengqiong Dong , Lei Nie , Jinlong Zhu , Shiyuan Liu
We propose a wavefront-driven optimization design method (WDOD) based on an indirect optimization strategy for computer-generated holograms, which optimizes the wavefront of a virtual object to enhance the imaging quality and to accelerate the convergence speed of amplitude-only holograms (AOHs). We experimentally demonstrated that the proposed method achieves faster convergence, higher imaging quality, and superior robustness compared to its traditional counterparts, such as gradient descent and modified Gerchberg-Saxton algorithms. Moreover, we have demonstrated that this method can be further extended to multi-plane and tilted-plane holography, which is critical to fields such as holographic displays, optical manipulation, and holographic lithography.
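For context, the modified Gerchberg-Saxton algorithm used as a baseline above derives from the classic GS iteration: alternate Fourier transforms between the hologram and image planes while enforcing each plane's amplitude constraint. A minimal far-field sketch is below; it is simplified to a phase-only hologram (not the amplitude-only case) and is a generic baseline, not the proposed WDOD method.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Classic Gerchberg-Saxton loop for a far-field phase hologram:
    keep unit amplitude at the hologram plane and the target amplitude
    at the image plane, transporting phase between them with FFTs."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        img = np.fft.fft2(field)
        img = target_amp * np.exp(1j * np.angle(img))   # image-plane constraint
        field = np.fft.ifft2(img)
        field = np.exp(1j * np.angle(field))            # hologram-plane constraint
    return np.angle(field)

# Toy target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(corr > 0.5)  # reconstruction overlaps the target despite speckle
```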
Citations: 0
Neural network–assisted optical temperature estimation using a composite-type optical fiberscope for transbronchial photothermal therapy
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-23 | DOI: 10.1016/j.optlaseng.2026.109642
Takeshi Seki , Kiyoshi Oka , Akihiro Naganawa
Transbronchial photothermal therapy (PTT) is a promising modality for treating peripheral lung cancer, especially in patients who are not eligible for surgery. However, real-time temperature monitoring in deep and narrow bronchial regions remains a significant challenge. Conventional thermometry techniques, such as MRI, CT, ultrasound, and pyrometry, are limited by poor spatial access, low accuracy, or incompatibility with bronchoscopy. To address this, we propose a neural network–assisted method to estimate tissue temperature using reflected light spectra and laser power, enabling non-contact thermometry without additional sensors. This approach was implemented using a composite-type optical fiberscope capable of simultaneous laser irradiation and optical sensing. We validated the system using tissue-mimicking phantoms containing porphysome at concentrations of 29, 58, and 100 µM under 250 or 500 mW laser irradiation, targeting temperatures between room temperature and 60 °C. Root Mean Square Error (RMSE) between estimated and actual temperatures was 1.50 °C to 2.89 °C for 250 mW and 1.60 °C to 3.44 °C for 500 mW. This is the first report of real-time temperature estimation using deep-learning and a composite fiberscope in bronchoscopic PTT. The proposed method enables compact, cost-effective, and sensorless temperature monitoring, offering a practical solution for safe and effective clinical PTT in anatomically constrained regions.
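The accuracy figures above are RMSE values between estimated and actual temperatures. For reference, the standard RMSE definition is sketched below on hypothetical readings; the values are illustrative only, not the paper's data.

```python
import numpy as np

def rmse(est, actual):
    """Root mean square error between estimated and reference
    temperatures (metric sketch only)."""
    est, actual = np.asarray(est, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((est - actual) ** 2)))

# Hypothetical temperature readings in deg C, for illustration.
actual = np.array([30.0, 40.0, 50.0, 60.0])
est = np.array([31.2, 38.5, 51.0, 61.8])
print(round(rmse(est, actual), 2))  # → 1.41
```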
Citations: 0
Space-frequency analysis-based Fourier single-pixel 3D imaging
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-23 | DOI: 10.1016/j.optlaseng.2026.109640
Le Hu , Xinyue Ma , Junyang Li , Ke Hu , Chenxing Wang
Single-pixel imaging (SPI) has gained significant attention due to its potential for wide-spectrum imaging. In SPI, the target scene is sampled by projecting a series of encoding patterns, and then an image is reconstructed from the captured intensity sequence. When combined with Fringe Projection Profilometry (FPP), 3D SPI can be achieved by modulating depth information into a fringe-deformed image. However, typical sampling strategies of SPI often struggle to balance low sampling rates with high accuracy, and maintaining imaging details is even more challenging. To address these issues, we propose a novel Fourier single-pixel imaging strategy based on space-frequency analysis. A windowed sampling strategy is introduced to efficiently obtain an image prior, addressing the single-pixel detector's inherent lack of spatial resolution. With the obtained prior image, spectral analysis is conducted to extract significant components for reconstructing the target's basic structure, with which, detail enhancement is carried out by spatial analysis. Finally, the detail-enhanced image is analyzed again to extract more significant components, enabling the recovery of target details. Extensive experiments verify that our space-frequency method enhances imaging details at low sampling rates.
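For background, classic four-step Fourier single-pixel imaging, which the proposed strategy builds on, probes each spatial frequency with four phase-shifted sinusoidal patterns and assembles the Fourier coefficient from the four single-pixel readings: F = (D0 − Dπ) + j(Dπ/2 − D3π/2). A minimal low-pass simulation is below; it illustrates the generic four-step scheme, not the paper's windowed sampling or space-frequency analysis.

```python
import numpy as np

def fspi_lowpass(img, k):
    """Simulate four-step Fourier SPI: probe the lowest k x k spatial
    frequencies with phase-shifted sinusoidal patterns, assemble the
    Fourier coefficients from the single-pixel sums, and invert."""
    n = img.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    spec = np.zeros((n, n), complex)
    for fy in range(-(k // 2), k // 2 + 1):
        for fx in range(-(k // 2), k // 2 + 1):
            phase0 = 2 * np.pi * (fx * xx + fy * yy) / n
            d = []
            for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2):
                pattern = 0.5 + 0.5 * np.cos(phase0 + phi)
                d.append(np.sum(img * pattern))      # single-pixel reading
            spec[fy % n, fx % n] = (d[0] - d[2]) + 1j * (d[1] - d[3])
    return np.real(np.fft.ifft2(spec))

n = 32
yy, xx = np.mgrid[0:n, 0:n]
img = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 40.0)  # smooth target
rec = fspi_lowpass(img, 9)          # 81 of 1024 frequencies, ~8% sampling
corr = np.corrcoef(rec.ravel(), img.ravel())[0, 1]
print(corr > 0.99)                  # smooth scenes survive heavy subsampling
```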
Citations: 0
Ultrafast VISAR velocity field reconstruction via deep unfolding networks and hardware-optimized deployment
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-22 | DOI: 10.1016/j.optlaseng.2026.109623
Miao Li, Chaorui Chen, Xi Wang, Xinru Zhang, Youwei Dai, Longwu Luo
We present a shock-wave velocity field ultrafast imaging reconstruction algorithm and a hardware deployment scheme for interferometric imaging systems with arbitrary reflective surfaces. The algorithm unfolds the alternating direction method of multipliers (ADMM) iterations into trainable network layers. It combines the interpretability of traditional optimization methods with deep learning to improve reconstruction accuracy. For hardware deployment, a co-optimization strategy is designed. This strategy uses operator mapping and 8-bit integer quantization for CPUs and deep learning processing units (DPUs). Simulation and experimental results are provided. Under high compression ratio encoding, the method achieves a peak signal-to-noise ratio (PSNR) of 29.53 dB. This is 11.46 dB higher than ADMM-TV and 6.91 dB higher than E-3DTV. The structural similarity index (SSIM) reaches 0.88. The learned perceptual image patch similarity (LPIPS) is 0.17. Experiments show that the maximum absolute error of the reconstructed velocity field is 1.58 km/s, with a relative error of 9.87%. Dynamic and static power consumption are reduced by 89.27% and 94.83%, respectively, without reducing reconstruction accuracy. These results show that the method improves reconstruction accuracy and reduces power consumption. It also addresses limitations of traditional algorithms in both accuracy and deployment efficiency. The method provides a reliable approach for dynamic reconstruction in ultrafast imaging of shock-wave velocity fields.
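The quality figures above are PSNR values in dB. For reference, the standard PSNR definition on unit-range images is sketched below; this is the metric only, computed on synthetic data, unrelated to the unfolded network itself.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with a given peak
    value (metric sketch; 10*log10(peak^2 / MSE))."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.random((64, 64))                               # synthetic reference
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
print(round(psnr(ref, noisy), 1))                        # ~26 dB for sigma=0.05
```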
Citations: 0
3D light field display with enhanced reconstruction accuracy based on distortion-suppressed compound lens array and pre-correction encoded image
IF 3.7 | CAS Tier 2, Engineering & Technology | Q2 OPTICS | Pub Date: 2026-01-22 | DOI: 10.1016/j.optlaseng.2026.109630
Xudong Wen, Xin Gao, Yaohe Zheng, Ziyun Lu, Jinhong He, Hanyu Li, Ningchi Li, Boyang Liu, Binbin Yan, Xunbo Yu, Xinzhu Sang
In order to achieve a satisfactory three-dimensional (3D) light field display, the 3D image information with correct spatial occlusion relation should be provided in a wide viewing angle range. However, the optical aberration and the structural error are two key causes of the deformation of reconstructed 3D images, especially at the large viewing angle. Here, a joint optimization method of the optical structure and image coding for 3D light field display is proposed to enhance the construction accuracy of voxels. A composite lens with an aperture is designed to suppress optical aberrations. A pre-correction method based on optical path detection is implemented to further mitigate the structural errors and residual optical aberrations. Experimental results demonstrated that the high-precision 3D light field display with a 100-degree viewing angle is achieved through the proposed method. The deviation of the voxel is reduced from 51.1 mm to 3.1 mm, compared with traditional methods. The method can meet the high-precision requirements of medical, military, and other applications.
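The pre-correction idea, warping the encoded image so that the optics' residual distortion cancels itself out, can be illustrated with a generic radial-distortion model. The k1 barrel-distortion model and the first-order inverse below are generic assumptions for illustration, not the paper's calibrated optical-path method.

```python
import numpy as np

def precorrect(points, k1):
    """Pre-distort normalized image coordinates so that a lens with
    barrel distortion r' = r * (1 + k1 * r^2) lands them on their
    intended positions. Uses the first-order inverse r ~ r' * (1 - k1 * r'^2).
    Generic illustration of pre-correction, not the paper's model."""
    pts = np.asarray(points, float)
    r = np.linalg.norm(pts, axis=-1, keepdims=True)
    return pts * (1.0 - k1 * r**2)

k1 = 0.05
target = np.array([[0.5, 0.5], [0.8, 0.0]])   # desired voxel positions
pre = precorrect(target, k1)                  # positions fed to the display
# Apply the forward distortion to the pre-corrected points:
r2 = np.sum(pre**2, axis=-1, keepdims=True)
out = pre * (1.0 + k1 * r2)
print(np.max(np.abs(out - target)) < 1e-2)    # → True: residual is tiny
```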
为了获得满意的三维光场显示效果,需要在较宽的视角范围内提供具有正确空间遮挡关系的三维图像信息。然而,光学像差和结构误差是造成三维重建图像变形的两个主要原因,尤其是在大视角下。本文提出了一种三维光场显示光学结构与图像编码的联合优化方法,以提高体素的构建精度。设计了一种带孔径的复合透镜来抑制光学像差。提出了一种基于光路检测的预校正方法,以进一步减小结构误差和残留光像差。实验结果表明,该方法可实现100度视角下的高精度三维光场显示。与传统方法相比,体素的偏差从51.1 mm减小到3.1 mm。可以满足医疗,军事和其他高精度要求的应用。
{"title":"3D light field display with enhanced reconstruction accuracy based on distortion- suppressed compound lens array and pre-correction encoded image","authors":"Xudong Wen,&nbsp;Xin Gao,&nbsp;Yaohe Zheng,&nbsp;Ziyun Lu,&nbsp;Jinhong He,&nbsp;Hanyu Li,&nbsp;Ningchi Li,&nbsp;Boyang Liu,&nbsp;Binbin Yan,&nbsp;Xunbo Yu,&nbsp;Xinzhu Sang","doi":"10.1016/j.optlaseng.2026.109630","DOIUrl":"10.1016/j.optlaseng.2026.109630","url":null,"abstract":"<div><div>In order to achieve a satisfactory three-dimensional (3D) light field display, the 3D image information with correct spatial occlusion relation should be provided in a wide viewing angle range. However, the optical aberration and the structural error are two key causes of the deformation of reconstructed 3D images, especially at the large viewing angle. Here, a joint optimization method of the optical structure and image coding for 3D light field display is proposed to enhance the construction accuracy of voxels. A composite lens with an aperture is designed to suppress optical aberrations. A pre-correction method based on optical path detection is implemented to further mitigate the structural errors and residual the optical aberrations. Experimental results demonstrated that the high-precision 3D light field display with a 100-degree viewing angle is achieved through the proposed method. The deviation of the voxel is reduced from 51.1 mm to 3.1 mm, compared with traditional methods. 
Medical, military and other applications with high-precision requirements can be met.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"201 ","pages":"Article 109630"},"PeriodicalIF":3.7,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
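The headline metric above, voxel deviation, is simply the average Euclidean distance between reconstructed and ideal voxel positions. A minimal sketch, assuming voxel positions are available as (N, 3) coordinate arrays in millimeters (the function name and data layout are illustrative, not from the paper):

```python
import numpy as np

def mean_voxel_deviation(reconstructed, ideal):
    """Mean Euclidean distance (mm) between reconstructed and ideal voxel positions.

    Both inputs are (N, 3) arrays of x, y, z coordinates; a smaller value
    means the displayed voxels land closer to their intended positions.
    """
    reconstructed = np.asarray(reconstructed, dtype=float)
    ideal = np.asarray(ideal, dtype=float)
    return float(np.linalg.norm(reconstructed - ideal, axis=1).mean())
```

Under a metric of this kind, the reported improvement corresponds to reducing the average positional error from 51.1 mm to 3.1 mm over the measured voxel set.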
Citations: 0
Evolution of fractional vortices through intensity autocorrelation of scattered speckle patterns
IF 3.7 Zone 2, Engineering & Technology Q2 OPTICS Pub Date : 2026-01-22 DOI: 10.1016/j.optlaseng.2026.109637
MD. Haider Ansari, Velagala Ganesh, Sakshi Choudhary, Ravi Kumar, Shashi Prabhakar, Salla Gangi Reddy
Measuring the topological charge (TC) of optical vortices is crucial for advancing applications such as optical communication and quantum information processing. Although various interferometric and non-interferometric techniques have been developed for coherent and partially coherent beams, most of these methods are ineffective for fractional-vortex beams, especially when the beam is perturbed. In this work, we propose and experimentally demonstrate a simple, non-interferometric technique based on autocorrelation for assessing and quantitatively measuring the TC of fractional vortex beams. We generated fractional optical vortex beams using computer-generated fork-shaped holograms and then obtained the corresponding random optical patterns after scattering through a rough surface. The autocorrelation rings of the random patterns reveal the TC of fractional vortex beams, and their asymmetry gradually becomes symmetric as the TC approaches an integer value. Additionally, by examining the divergence of the first dark ring with respect to propagation distance, we can quantitatively estimate the fractional TC. The measured divergence closely matches theoretical results, achieving an accuracy of over 98%. The proposed method eliminates the need for phase retrieval, coherence modulation, or interferometry, providing a practical and robust solution for measuring fractional TCs, even in the presence of perturbations such as scattering and mild atmospheric turbulence, which are common in free-space optical communication systems.
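The central computation, the intensity autocorrelation of a scattered speckle pattern, can be sketched with the Wiener–Khinchin theorem (autocorrelation as the inverse FFT of the power spectrum). This is a generic illustration of the technique named in the abstract, not the authors' implementation:

```python
import numpy as np

def intensity_autocorrelation(speckle):
    """Normalized 2D autocorrelation of a speckle intensity pattern.

    Wiener-Khinchin: the autocorrelation of the mean-subtracted intensity
    equals the inverse FFT of its power spectrum. Per the abstract, the
    ring structure of this map encodes the fractional topological charge.
    """
    I = np.asarray(speckle, dtype=float)
    I = I - I.mean()                      # remove the DC background
    power = np.abs(np.fft.fft2(I)) ** 2   # power spectrum of the pattern
    ac = np.fft.ifft2(power).real         # circular autocorrelation
    ac = np.fft.fftshift(ac)              # move the zero-lag peak to center
    return ac / ac.max()                  # normalize the peak to 1
```

The zero-lag peak sits at the array center after `fftshift`; the rings surrounding it are the features whose asymmetry tracks the fractional TC.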
Citations: 0
High efficiency Raman microscopic imaging based on a dispersion-interference hybrid spectrometer
IF 3.7 Zone 2, Engineering & Technology Q2 OPTICS Pub Date : 2026-01-20 DOI: 10.1016/j.optlaseng.2026.109624
Huitong Huang, Xin Meng, Xiaohong Jiang, Ao Wang, Yixuan Xu, Zhibiao Liu, Yixuan Liu, Jianxin Li
Optical throughput and spectral resolution are two critical parameters for achieving rapid and accurate Raman microscopic imaging, but conventional methods often face a trade-off between the two. To address this limitation, this work proposes a high-efficiency Raman microscopic imaging system based on a dispersion-interference hybrid spectroscopic architecture. By integrating Amici prisms into a Sagnac interferometer, the system enables spectral shearing control without the need for an entrance slit, achieving both high throughput and good spectral resolution. Using a 785 nm laser, the system was evaluated on Nd:Y₃Al₅O₁₂ ceramics, fluorite crystals, and SERS-enhanced 4-aminothiophenol samples on a gold substrate, achieving a point spectral resolution of 10 cm⁻¹ and a full width at half maximum (FWHM) resolution of 25 cm⁻¹. Combined with a 1024 × 1024 pixel EMCCD camera, the system supports area imaging and employs a one-dimensional push-broom scanning strategy to efficiently acquire the entire field of view, significantly improving imaging speed compared with conventional point-by-point scanning methods. Experimental results demonstrate that the system offers high throughput, high resolution, and rapid Raman hyperspectral imaging capabilities, with potential applications in material analysis, biological detection, and chemical imaging.
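The FWHM figure quoted above is conventionally read off a measured line profile at half its peak height. A minimal sketch of a linear-interpolated FWHM estimate for a single-peak spectrum, assuming `wavenumber` (cm⁻¹, increasing) and `intensity` arrays (the names are illustrative):

```python
import numpy as np

def fwhm(wavenumber, intensity):
    """Full width at half maximum of a single-peak spectrum, by linear interpolation."""
    x = np.asarray(wavenumber, dtype=float)
    y = np.asarray(intensity, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]        # indices of samples at or above half maximum
    i0, i1 = above[0], above[-1]
    # left crossing: interpolate between the last point below and the first above
    left = x[i0] if i0 == 0 else np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # right crossing: y is decreasing here, so order the two points by ascending y
    right = x[i1] if i1 == len(y) - 1 else np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left
```

For a Gaussian line of standard deviation σ this returns approximately 2.355σ, the textbook FWHM relation.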
Citations: 0
Journal
Optics and Lasers in Engineering