
Latest Publications in Optics and Lasers in Engineering

S3CRAD: Superpixel-guided background inpainting and spatial-spectral constrained representation for hyperspectral anomaly detection
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-24; DOI: 10.1016/j.optlaseng.2026.109657
Mingtao You , Yiming Yao , Dong Zhao , Zhe Zhao , Pattathal V. Arun , Yian Wang , Huixin Zhou , Ronghua Chi
Hyperspectral anomaly detection techniques aim to effectively separate anomalies from the background. Most existing approaches do not focus on the contours of anomalous targets, resulting in blurred detection results. To overcome this challenge, we propose a superpixel-guided background inpainting and spatial-spectral constrained representation method for hyperspectral anomaly detection (S3CRAD). Specifically, we propose a superpixel-guided strategy that highlights the boundary information between anomalies and the background. Moreover, existing methods do not fully exploit the differences between anomalies and the background during background reconstruction. Hence, we propose a multi-feature fusion strategy that accounts for differences in image contrast, further emphasizing the distinction between anomaly and background pixels. Finally, we propose a spatial-spectral weighting scheme to regularize the representation coefficients, thereby exploiting spatial and spectral information more effectively than existing methods. With the regularized coefficients, the target pixel is better reconstructed via representation. The anomaly result is obtained by computing the residual between the original and reconstructed pixels. The key advantage of our method lies in its ability to fully utilize both spatial and spectral information while effectively reducing the impact of noise on anomaly detection results. Experimental results demonstrate that our approach outperforms nine state-of-the-art methods.
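The final scoring step described above, taking the residual between each original pixel and its background-based reconstruction, can be sketched in a few lines. This is a minimal illustration only: the helper name `anomaly_scores` and the toy spectra are invented, and the paper's superpixel guidance, inpainting, and constrained-representation model are not reproduced.

```python
import numpy as np

def anomaly_scores(pixels, reconstructed):
    """Per-pixel anomaly score: L2 norm of the residual between the
    original spectrum and its background-based reconstruction.
    pixels, reconstructed: arrays of shape (num_pixels, num_bands)."""
    return np.linalg.norm(pixels - reconstructed, axis=1)

# toy example: background pixels reconstruct well, the anomaly does not
orig = np.array([[1.0, 1.0, 1.0],
                 [1.0, 1.1, 0.9],
                 [5.0, 0.2, 4.0]])   # last row: anomalous spectrum
recon = np.array([[1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0]])  # background-only reconstruction
scores = anomaly_scores(orig, recon)
```

Pixels whose spectra the background model explains get near-zero scores; the anomalous pixel stands out with a large residual.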
Optics and Lasers in Engineering, Volume 201, Article 109657.
Citations: 0
Space-frequency analysis-based Fourier single-pixel 3D imaging
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-23; DOI: 10.1016/j.optlaseng.2026.109640
Le Hu , Xinyue Ma , Junyang Li , Ke Hu , Chenxing Wang
Single-pixel imaging (SPI) has gained significant attention due to its potential for wide-spectrum imaging. In SPI, the target scene is sampled by projecting a series of encoding patterns, and an image is then reconstructed from the captured intensity sequence. When combined with fringe projection profilometry (FPP), 3D SPI can be achieved by modulating depth information into a fringe-deformed image. However, typical SPI sampling strategies often struggle to balance low sampling rates with high accuracy, and preserving imaging details is even more challenging. To address these issues, we propose a novel Fourier single-pixel imaging strategy based on space-frequency analysis. A windowed sampling strategy is introduced to efficiently obtain an image prior, mitigating the single-pixel detector's inherent lack of spatial resolution. Spectral analysis of the prior image then extracts the significant components for reconstructing the target's basic structure, after which detail enhancement is carried out by spatial analysis. Finally, the detail-enhanced image is analyzed again to extract further significant components, enabling the recovery of target details. Extensive experiments verify that our space-frequency method enhances imaging details at low sampling rates.
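The basic Fourier single-pixel acquisition loop that this work builds on can be sketched as follows: for each spatial frequency, project four phase-shifted sinusoidal patterns, record the total reflected intensity (the single-pixel reading), and combine the four readings into one complex Fourier coefficient. This is a toy sketch of generic Fourier SPI; the paper's windowed sampling and space-frequency component selection are not reproduced.

```python
import numpy as np

def fourier_spi(scene, freqs):
    """Toy Fourier single-pixel acquisition: four-step phase-shifted
    sinusoidal patterns per sampled frequency, then inverse FFT."""
    h, w = scene.shape
    y, x = np.mgrid[0:h, 0:w]
    spectrum = np.zeros((h, w), dtype=complex)
    for fy, fx in freqs:
        phase = 2 * np.pi * (fx * x / w + fy * y / h)
        # four-step phase shifting: 0, pi/2, pi, 3pi/2
        d = [np.sum(scene * (0.5 + 0.5 * np.cos(phase + k * np.pi / 2)))
             for k in range(4)]
        # (d0 - d2) - i(d3 - d1) = sum(scene * exp(-i*phase)) = DFT coefficient
        spectrum[fy, fx] = (d[0] - d[2]) - 1j * (d[3] - d[1])
    return np.fft.ifft2(spectrum).real

# sampling every frequency reproduces the scene exactly; real systems
# sample only the significant (typically low-frequency) components
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
recon = fourier_spi(scene, [(fy, fx) for fy in range(8) for fx in range(8)])
```

Sub-sampling `freqs` trades reconstruction fidelity for fewer projected patterns, which is exactly the sampling-rate/accuracy tension the abstract describes.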
Optics and Lasers in Engineering, Volume 201, Article 109640.
Citations: 0
ZnSe based 2D photonic crystal biosensor supported with machine learning for critical blood component detection and diagnosis of hypofibrinogenemia
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-02-10; DOI: 10.1016/j.optlaseng.2026.109678
Harikrishnan N, Sangeetha A
A biosensor for detecting critical blood components using a zinc selenide (ZnSe) two-dimensional photonic crystal is presented. The proposed sensor is expected to be a resourceful approach for identifying hypofibrinogenemia, chronic renal disease, and artery-related disorders. Wavelength sensitivity, quality factor (QF), and figure of merit are the defining metrics by which the sensor is evaluated. The sensor is designed and implemented using the finite-difference time-domain (FDTD) method. A four-cavity layout is investigated in the sensor topology to refine the spectral separation between the analytes. A machine learning (ML) platform is deployed to predict optical parameters such as wavelength sensitivity and peak wavelength. ML models such as polynomial regression, gradient boosting, XGBoost regression, K-nearest neighbor regression, and random forest are leveraged to predict the metrics. The R² score, mean squared error (MSE), and mean absolute error (MAE) are evaluated for each model to assess the accuracy of predicting sensor metrics.
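The three evaluation metrics named above have standard definitions, sketched below alongside a polynomial-regression fit of the kind the abstract mentions. The refractive-index/wavelength data and the linear response are invented for illustration; they do not come from the paper.

```python
import numpy as np

# standard regression metrics used to compare the ML models
def r2_score(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(y, yhat):
    """Mean squared error."""
    return np.mean((y - yhat) ** 2)

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

# hypothetical data: analyte refractive index vs. resonant wavelength (nm),
# with an assumed linear sensor response for the sake of the example
n = np.linspace(1.33, 1.40, 20)
wavelength = 1550 + 900 * (n - 1.33)

coeffs = np.polyfit(n, wavelength, 1)   # polynomial regression, degree 1
pred = np.polyval(coeffs, n)
```

On this noiseless toy data the fit is essentially exact, so R² approaches 1 while MSE and MAE approach 0; on real measured data the three metrics diverge and rank the candidate models differently.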
Optics and Lasers in Engineering, Volume 201, Article 109678.
Citations: 0
Large-aperture nonparaxial Alvarez lenses enabling varifocal augmented reality head-up displays with a wide field of view
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-24; DOI: 10.1016/j.optlaseng.2026.109638
Yi Liu, Haoteng Liu, Zhiqing Zhao, Qimeng Wang, Weiji Liang, Bo-Ru Yang, Zong Qin
Augmented reality head-up displays (AR-HUDs) serve as in-vehicle human-machine interfaces, with a critical requirement for a continuously adjustable virtual image distance (VID) across a large field of view (FOV). This adaptability allows the virtual image to align with objects at varying depths in the road environment, thereby mitigating visual fatigue. An AR-HUD employs a wide FOV and a large eyebox, resulting in a high étendue, which necessitates optical elements spanning more than 10 centimeters. Consequently, the varifocal component must feature a large aperture and be thin to ensure a compact form factor suitable for automotive integration. Among various slim varifocal optics, Alvarez lenses, which work through transverse lens displacement, offer good scalability owing to the mature fabrication of freeform optics. However, increasing the aperture introduces significant aberrations in the conventional cubic-form Alvarez lens design. This study presents optimal design rules for applying Alvarez lenses to AR-HUDs, including the displacement method, higher-order terms beyond the classic cubic form, and co-optimization with the freeform mirror. An AR-HUD prototype with a FOV of 13° by 5° and an eyebox of 130 mm by 60 mm was built using Alvarez lenses. A continuously variable VID ranging from 2.5 to 7.5 m was achieved with a resolution of more than 60 pixels per degree. The Alvarez lenses span more than 20 cm transversely and are only 2.5 cm thick, enabling a compact volume of 10.4 liters, almost no increase over a single-focal HUD.
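The "classic cubic form" referenced above can be checked numerically: displacing two complementary cubic elements by ±d produces a purely quadratic (lens-like) combined thickness whose optical power grows linearly with d. The sketch below uses arbitrary units and only the classic cubic term; the paper's higher-order, nonparaxial corrections are not included.

```python
import numpy as np

A, d = 0.5, 0.1                      # cubic coefficient and lateral shift (arbitrary units)
y, x = np.mgrid[-1:1:41j, -1:1:41j]  # evaluation grid over the aperture

def t(x, y):
    """Classic cubic Alvarez thickness profile t(x, y) = A(x^3/3 + x*y^2)."""
    return A * (x**3 / 3 + x * y**2)

# one element shifted by +d, its complementary (negated) element by -d:
# combined(x, y) = t(x + d, y) - t(x - d, y)
combined = t(x + d, y) - t(x - d, y)

# the algebra predicts a paraboloid plus a constant piston term,
# i.e. a thin lens whose power scales linearly with the displacement d
expected = 2 * A * d * (x**2 + y**2) + 2 * A * d**3 / 3
```

Doubling `d` doubles the quadratic coefficient, which is the varifocal mechanism; the residual aberrations the paper addresses appear once the aperture grows and the paraxial cubic model stops holding.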
Optics and Lasers in Engineering, Volume 201, Article 109638.
Citations: 0
3D light field display with enhanced reconstruction accuracy based on distortion-suppressed compound lens array and pre-correction encoded image
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-22; DOI: 10.1016/j.optlaseng.2026.109630
Xudong Wen, Xin Gao, Yaohe Zheng, Ziyun Lu, Jinhong He, Hanyu Li, Ningchi Li, Boyang Liu, Binbin Yan, Xunbo Yu, Xinzhu Sang
To achieve a satisfactory three-dimensional (3D) light field display, 3D image information with the correct spatial occlusion relations should be provided over a wide viewing-angle range. However, optical aberration and structural error are two key causes of deformation in reconstructed 3D images, especially at large viewing angles. Here, a joint optimization method of the optical structure and image coding for 3D light field display is proposed to enhance the construction accuracy of voxels. A composite lens with an aperture is designed to suppress optical aberrations. A pre-correction method based on optical path detection is implemented to further mitigate the structural errors and residual optical aberrations. Experimental results demonstrate that high-precision 3D light field display with a 100-degree viewing angle is achieved through the proposed method. The deviation of the voxel is reduced from 51.1 mm to 3.1 mm compared with traditional methods. Medical, military, and other applications with high-precision requirements can be met.
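A generic way to realize pre-correction of an encoded image is to warp coordinates by the numerical inverse of the measured forward distortion, so that after the optics distort the displayed content, points land where intended. The sketch below uses a fixed-point inversion and a hypothetical mild `radial` distortion as a stand-in for the paper's optical-path-detection data; the actual measurement and encoding procedure are not reproduced.

```python
import numpy as np

def precorrect(pts, distort, n_iter=20):
    """Fixed-point inversion of a near-identity forward distortion:
    find p such that distort(p) ~= pts, so content rendered at p
    appears at the desired location pts after the optics distort it."""
    p = pts.copy()
    for _ in range(n_iter):
        p = p + (pts - distort(p))   # converges when distort is near-identity
    return p

def radial(p, k=0.05):
    """Hypothetical mild radial distortion (stand-in for measured data)."""
    r2 = np.sum(p**2, axis=-1, keepdims=True)
    return p * (1.0 + k * r2)

targets = np.array([[0.5, 0.3], [-0.4, 0.6], [0.0, 0.0]])
pre = precorrect(targets, radial)
```

Applying the forward distortion to the pre-corrected points recovers the target locations, which is the property a pre-corrected encoded image needs.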
Optics and Lasers in Engineering, Volume 201, Article 109630.
Citations: 0
Single-photon lidar system and multiscale optimization algorithm for extreme SBR
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-20; DOI: 10.1016/j.optlaseng.2026.109629
Jinfeng Xu , Qingsheng Xue , Fengqin Lu , Junhong Song , Xing Li
Single-photon counting lidar enables highly sensitive imaging by detecting extremely weak photon returns, but reconstructing reliable depth and reflectance from sparse photon data under extremely low signal-to-background ratio (SBR) conditions remains challenging. We propose a reflectance-guided multi-scale joint optimization framework built on the Sparse Poisson Intensity Reconstruction Algorithm (SPIRAL-TAP), which achieves collaborative reconstruction of reflectance and depth and thus avoids the edge blurring and structural inconsistencies often observed when reflectance and depth are reconstructed independently. In the depth update, reflectance-weighted guidance is introduced to improve reconstruction quality. Compared with several signal reconstruction algorithms, the proposed algorithm achieves high-quality 3D reconstruction with a reflectance root mean square error (RMSE) of 0.11 and a depth RMSE of 0.2 m at an extreme SBR of 0.04, representing a 48% reduction relative to the single-scale SPIRAL-TAP method. The effectiveness and generality of the framework are validated on publicly available sparse photon datasets. The experimental results demonstrate that the method significantly improves reconstruction accuracy while preserving fine spatial details, and provides a practical solution for 3D imaging under low-SBR single-photon lidar conditions.
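The Poisson observation model underlying SPIRAL-type reconstruction can be sketched with plain projected gradient descent on the negative log-likelihood of counts y ~ Poisson(Ax + b). This is a toy stand-in: an identity sensing matrix and noiseless counts are assumed, and SPIRAL-TAP's sparsity penalty and Barzilai-Borwein step-size selection are not reproduced.

```python
import numpy as np

def poisson_nll(x, A, y, b):
    """Negative log-likelihood (up to constants) for y ~ Poisson(Ax + b)."""
    lam = A @ x + b
    return np.sum(lam - y * np.log(lam))

def reconstruct(A, y, b, steps=500, lr=0.5):
    """Projected gradient descent on the Poisson NLL with a
    nonnegativity constraint on the intensity estimate."""
    x = np.ones(A.shape[1])
    for _ in range(steps):
        lam = A @ x + b
        grad = A.T @ (1.0 - y / lam)        # d NLL / d x
        x = np.maximum(x - lr * grad, 1e-8) # project onto nonnegatives
    return x

A = np.eye(3)                  # toy sensing operator
x_true = np.array([2.0, 5.0, 1.0])
b = 0.5                        # known background (noise floor) rate
y = A @ x_true + b             # noiseless expected counts
x_hat = reconstruct(A, y, b)
```

With noiseless counts the iteration recovers the true intensities; with real sparse photon data the regularization terms the paper adds become essential.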
Optics and Lasers in Engineering, Volume 201, Article 109629.
Citations: 0
Clutter suppression of optical coherence tomography angiography based on Kalman filtering of optical attenuation coefficient and eigen decomposition for deep cortex vascular visualization
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-28; DOI: 10.1016/j.optlaseng.2026.109660
Ben Xiang , Xinru Wu , Haoran Zhang , Jian Liu , Yao Yu , Jingmin Luan , Yuqian Zhao , Yi Wang , Yanqiu Yang , Zhenhe Ma
In this study, we present an optical coherence tomography angiography (OCTA) method based on Kalman filtering of the optical attenuation coefficient (OAC) and eigen decomposition (oED) for clutter suppression and deep microvascular signal enhancement. The proposed approach exploits the inherent characteristics of the tissue and suppresses the influence of the noise floor on the accuracy of the OAC calculation through Kalman filtering, effectively solving the problem that weak deep OCT signals cause deep microvascular signals to be overwhelmed by noise and light-source jitter. Concurrently, by performing eigen decomposition on the computed OAC images, the method achieves substantial clutter suppression. In phantom and in vivo experiments, oED effectively suppresses clutter and noise, extracts deep microvascular signals, promotes better differentiation between blood vessels and static tissue, and visualizes deep microvessels. Owing to these advantages, vascular maps processed with the oED method exhibit enhanced vessel visibility and connectivity, thereby substantially improving overall image quality. This approach holds significant promise for studying vascular-related pathology.
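The eigen-decomposition step, projecting out the dominant component of the repeat-scan ensemble that static tissue dominates, can be sketched generically. This is a textbook ED clutter filter, not the paper's full pipeline: the Kalman-filtered OAC computation is not reproduced, and the toy ensemble is invented.

```python
import numpy as np

def eig_clutter_filter(frames, n_remove=1):
    """Generic eigen-decomposition clutter filter: estimate the covariance
    of the repeat-scan ensemble and project out its dominant
    eigencomponents, which static (clutter) signal dominates.
    frames: array of shape (n_repeats, n_pixels)."""
    cov = frames @ frames.conj().T / frames.shape[1]
    w, v = np.linalg.eigh(cov)                # eigenvalues in ascending order
    clutter = v[:, -n_remove:]                # dominant (clutter) subspace
    proj = np.eye(frames.shape[0]) - clutter @ clutter.conj().T
    return proj @ frames

# toy ensemble: 4 repeat scans of a purely static pixel pattern,
# identical across repeats, so it lies entirely in the clutter subspace
rng = np.random.default_rng(0)
static = np.outer(np.ones(4), rng.random(256))
filtered = eig_clutter_filter(static)
```

A purely static ensemble is removed almost completely; decorrelating flow signal, which does not align with the dominant eigenvector, survives the projection.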
Optics and Lasers in Engineering, Volume 201, Article 109660.
Citations: 0
An optimized GPU-accelerated digital image correlation algorithm
IF 3.7, CAS Tier 2 (Engineering & Technology), Q2 OPTICS. Pub Date: 2026-06-01; Epub Date: 2026-01-31; DOI: 10.1016/j.optlaseng.2026.109635
Yuanhang Dou , Xuexi Cui , Xiangdong Wu , Min Wan
The digital image correlation (DIC) algorithm is widely used to measure the deformation of materials under load. Due to the inherent complexity of the DIC algorithm, high-sampling-rate measurement of displacement or deformation fields remains a significant challenge. We leverage the CUDA parallel computing environment of NVIDIA GPUs combined with an innovative DIC algorithm to improve processing speed. We divide all subsets in the measurement region into multiple levels and analyze the motion characteristics of the subsets across the image sequence. The initial values of the deformation parameters of other subsets are estimated from the results of already computed subsets, which greatly reduces the computational load of the initial guess. Additionally, this method enables the entire continuous computation to be executed on the GPU, avoiding frequent data exchange and CPU involvement. During the inverse compositional Gauss-Newton (IC-GN) iterative calculation, the L1 cache and thread resources of the GPU are fully utilized to enhance computing speed. The accuracy of the method was validated on the recognized DIC Challenge dataset, confirming that it meets measurement requirements. A maximum 2D full-field DIC measurement speed of over 6 × 10⁶ points per second was achieved, with a stereo measurement rate exceeding 200 fps. The reliability of the algorithm was verified in experiments using real uniaxial tensile, biaxial tensile, and vibration test images.
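The correlation criterion at the heart of DIC, and the integer-pixel search that typically seeds the subpixel IC-GN refinement, can be sketched as follows. This is CPU-side NumPy for clarity; the paper's CUDA kernels and multi-level initial-guess propagation are not reproduced, and the image pair is synthetic.

```python
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two equally sized subsets;
    insensitive to affine intensity changes between reference and target."""
    f0, g0 = f - f.mean(), g - g.mean()
    return np.sum(f0 * g0) / np.sqrt(np.sum(f0**2) * np.sum(g0**2))

def integer_search(ref, target, top_left, size, search=5):
    """Brute-force integer-pixel displacement search maximizing ZNCC;
    in a full pipeline the result seeds the subpixel IC-GN iteration."""
    y0, x0 = top_left
    f = ref[y0:y0 + size, x0:x0 + size]
    best, best_uv = -2.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            g = target[y0 + v:y0 + v + size, x0 + u:x0 + u + size]
            c = zncc(f, g)
            if c > best:
                best, best_uv = c, (u, v)
    return best_uv

# toy pair: the target is the reference rigidly shifted by (u, v) = (3, 2)
rng = np.random.default_rng(1)
ref = rng.random((40, 40))
target = np.roll(ref, (2, 3), axis=(0, 1))
uv = integer_search(ref, target, top_left=(15, 15), size=9)
```

The O(search²) loop per subset is what makes naive initial guessing expensive; propagating initial values from already computed neighboring subsets, as the paper does, collapses most of this cost.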
An optimized GPU-accelerated digital image correlation algorithm. Optics and Lasers in Engineering, vol. 201, Article 109635.
Citations: 0
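The multi-level initial-guess transfer described in the DIC abstract above can be illustrated with a toy subset-matching sketch: seeding the integer-pixel search from a displacement already computed for a neighbouring subset keeps the search radius small. The ZNCC metric, window sizes, and search logic below are generic illustrations under stated assumptions, not the paper's implementation (which additionally performs IC-GN sub-pixel refinement on the GPU).

```python
import numpy as np

def zncc(f, g):
    # Zero-normalized cross-correlation between two equal-size subsets.
    f = f - f.mean()
    g = g - g.mean()
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return float((f * g).sum() / denom) if denom > 0 else 0.0

def match_subset(ref, cur, center, half, guess, radius=2):
    # Integer-pixel search for a (2*half+1)-sized subset around an initial
    # displacement guess; `guess` would come from an already-computed
    # neighbouring subset, which is what keeps `radius` small.
    y, x = center
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best_c, best_d = -2.0, guess
    for dy in range(guess[0] - radius, guess[0] + radius + 1):
        for dx in range(guess[1] - radius, guess[1] + radius + 1):
            win = cur[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1]
            c = zncc(tpl, win)
            if c > best_c:
                best_c, best_d = c, (dy, dx)
    return best_d, best_c

# Synthetic speckle image shifted rigidly by (+3, -2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, (3, -2), axis=(0, 1))
disp, score = match_subset(ref, cur, center=(32, 32), half=7, guess=(2, -1))
```

In the paper's setting the seed displacement comes from a previously processed level of subsets, and the resulting integer displacement is then refined to sub-pixel accuracy by the IC-GN iterations executed entirely on the GPU.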
CEDF-VSNLO: A novel computational-efficient denoising framework via variance stabilization and noise-adaptive low-rank optimization for fringe patterns
IF 3.7 2区 工程技术 Q2 OPTICS Pub Date : 2026-06-01 Epub Date: 2026-01-31 DOI: 10.1016/j.optlaseng.2026.109667
Yuheng Li , Cixing Lv , Junwei Liang , Yi Qin , Jiale Li , Yunyao Zeng
Phase Shifting Profilometry (PSP) stands as a dominant technique within optical metrology for high-precision 3D measurement, yet its accuracy is fundamentally limited by the mixed Gaussian-Poisson (GP) noise, which necessitates highly effective denoising. However, existing methods suffer from limitations in computational efficiency and the effective use of physical priors. To overcome these limitations, this paper proposes a novel Computational-Efficient Denoising Framework via Variance Stabilization and Noise-adaptive Low-Rank Optimization, called CEDF-VSNLO. The proposed framework introduces a computational strategy that transforms the denoising task from processing N phase-shifted fringe patterns to processing only two variance-stabilized sine/cosine component images. Since the number of processed images is fixed at two regardless of the phase shift steps N, this approach decouples the computational cost from the shift steps, thereby achieving a fundamental reduction in complexity from linear O(N) to constant O(1) relative to N. Additionally, the framework is further enhanced by an improved method for estimating the inherent Amplitude-Modulation and Frequency-Modulation (AM-FM) physical prior. Guided by the resulting AM-FM map, a two-stage clustering strategy is then employed to group image blocks based on their shared noise characteristics. This organization enables a final, noise-adaptive low-rank denoising process, where the regularization strength for each cluster is dynamically calibrated using its average AM-FM values to optimally balance noise suppression with structural fidelity. Simulations and real experiments demonstrate that the proposed CEDF-VSNLO framework significantly improves phase accuracy and structural fidelity, outperforming current state-of-the-art techniques.
Optics and Lasers in Engineering, vol. 201, Article 109667.
Citations: 0
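The core computational idea in the CEDF-VSNLO abstract, collapsing N phase-shifted patterns into two sine/cosine component images so the denoising cost is O(1) in N, can be sketched as follows. The component sums are the standard phase-shifting least-squares formulas; the Anscombe transform is a generic stand-in for the paper's variance-stabilization step, which may differ.

```python
import numpy as np

def sin_cos_components(frames):
    # For I_n = A + B*cos(phi + 2*pi*n/N), the uniform-step sums give
    # S = -(N/2)*B*sin(phi) and C = (N/2)*B*cos(phi), so only these two
    # images need denoising no matter how large N is.
    N = len(frames)
    deltas = 2 * np.pi * np.arange(N) / N
    S = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    C = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return S, C

def anscombe(x):
    # Generic variance-stabilizing transform for Poisson-dominated noise;
    # an assumed stand-in for the paper's own stabilization step.
    return 2.0 * np.sqrt(np.maximum(x, 0.0) + 3.0 / 8.0)

# Noise-free sanity check: recover the wrapped phase from N = 8 patterns.
N, phi = 8, np.linspace(-3.0, 3.0, 16).reshape(4, 4)
frames = [100 + 50 * np.cos(phi + 2 * np.pi * n / N) for n in range(N)]
S, C = sin_cos_components(frames)
phi_rec = np.arctan2(-S, C)                 # wrapped phase, independent of N
stabilized = [anscombe(I) for I in frames]  # applied before denoising noisy data
```

Because S and C are always just two images, any subsequent (low-rank) denoiser runs at a cost that no longer grows with the number of phase-shift steps.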
Non-line-of-sight human pose estimation
IF 3.7 2区 工程技术 Q2 OPTICS Pub Date : 2026-06-01 Epub Date: 2026-02-10 DOI: 10.1016/j.optlaseng.2026.109658
Zhongpei Xiao , Chen Dai , Ruilin Ye , Jianwei Zeng , Wenwen Li , Feihu Xu
Estimating human poses in non-line-of-sight (NLOS) conditions is crucial for applications in rescue and security. However, directly applying existing pose estimation technology to NLOS imaging results in low accuracy due to severe signal degradation and limited spatial resolution, particularly when the target is far from the relay surface. To address these limitations, we propose a novel framework for NLOS human pose estimation incorporating two key advancements. First, we introduce a deep learning architecture that jointly extracts features from both depth and intensity maps, enabling robust pose estimation even under low signal-to-noise ratio (SNR) conditions. Second, a physics-based NLOS simulation pipeline is provided to generate a large-scale dataset of NLOS pose samples, enabling flexible training data synthesis without complex capture systems. Experimental results on both synthetic and real-world NLOS datasets demonstrate that our approach achieves accurate pose estimation at challenging distances (up to 1.75 m from relay surfaces) under low-SNR conditions. These results highlight the significant potential of our framework for practical NLOS human pose estimation applications.
Optics and Lasers in Engineering, vol. 201, Article 109658.
Citations: 0
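A physics-based NLOS simulation pipeline of the kind the abstract above mentions can be sketched, at its simplest, as a three-bounce transient renderer for a single hidden point scatterer. The geometry, the 1/r² falloff model, and all parameters below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

C_LIGHT = 3e8  # speed of light, m/s

def transient(relay_pts, sensor, target, n_bins=512, bin_ps=64):
    # Toy three-bounce transient for one hidden point scatterer: each relay
    # point on the visible wall contributes a return delayed by the
    # out-and-back path sensor -> wall -> target -> wall -> sensor,
    # attenuated by 1/r^2 on each leg.
    hist = np.zeros(n_bins)
    bin_m = bin_ps * 1e-12 * C_LIGHT         # one time bin as path length (m)
    for p in relay_pts:
        r1 = np.linalg.norm(p - sensor)      # sensor -> relay wall
        r2 = np.linalg.norm(target - p)      # relay wall -> hidden target
        b = int(2.0 * (r1 + r2) / bin_m)
        if b < n_bins:
            hist[b] += 1.0 / (r1 * r1 * r2 * r2)
    return hist

xs = np.linspace(-0.5, 0.5, 16)
relay = np.array([[x, y, 0.0] for x in xs for y in xs])  # wall at z = 0
sensor = np.array([0.0, 0.0, 1.0])
target = np.array([0.2, -0.1, 1.75])   # 1.75 m off the relay surface
h = transient(relay, sensor, target)
first_bin = int(np.argmax(h > 0))      # earliest-return bin
```

Sweeping the target position over an articulated body model and adding detector noise would yield transient histograms of the sort a learned pose estimator could train on, which is the role the paper's (more sophisticated) simulation pipeline plays.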