
Latest Publications in IEEE Transactions on Computational Imaging

Closed-Form Approximation of the Total Variation Proximal Operator
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-28 | DOI: 10.1109/TCI.2025.3603689
Edward P. Chandler;Shirin Shoushtari;Brendt Wohlberg;Ulugbek S. Kamilov
Total variation (TV) is a widely used function for regularizing imaging inverse problems and is particularly appropriate for images whose underlying structure is piecewise constant. TV-regularized optimization problems are typically solved using proximal methods, but the way in which they are applied is constrained by the absence of a closed-form expression for the proximal operator of the TV function. A closed-form approximation of the TV proximal operator has previously been proposed, but its accuracy was not theoretically explored in detail. We address this gap by making several new theoretical contributions, proving that the approximation leads to a proximal operator of some convex function, that it is equivalent to a gradient-descent step on a smoothed version of TV, and that its error can be fully characterized and controlled with its scaling parameter. We experimentally validate our theoretical results on image denoising and sparse-view computed tomography (CT) image reconstruction.
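The abstract's central equivalence, that the approximation amounts to one gradient-descent step on a smoothed TV, can be sketched as follows. This is a minimal illustration, assuming isotropic TV smoothed by a small parameter `eps`; the difference operators and step form are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def grad(x):
    """Forward differences with Neumann (replicate) boundary."""
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def approx_prox_tv(v, lam, eps=1e-2):
    """One gradient step on the smoothed TV at v: v - lam * grad(TV_eps)(v),
    where grad(TV_eps)(v) = -div(grad(v) / sqrt(|grad(v)|^2 + eps^2))."""
    gx, gy = grad(v)
    norm = np.sqrt(gx**2 + gy**2 + eps**2)
    return v - lam * (-div(gx / norm, gy / norm))
```

On a constant image the smoothed-TV gradient vanishes, so the step returns its input unchanged, matching the intuition that piecewise-constant regions are fixed points.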
{"title":"Closed-Form Approximation of the Total Variation Proximal Operator","authors":"Edward P. Chandler;Shirin Shoushtari;Brendt Wohlberg;Ulugbek S. Kamilov","doi":"10.1109/TCI.2025.3603689","DOIUrl":"https://doi.org/10.1109/TCI.2025.3603689","url":null,"abstract":"Total variation (TV) is a widely used function for regularizing imaging inverse problems that is particularly appropriate for images whose underlying structure is piecewise constant. TV regularized optimization problems are typically solved using proximal methods, but the way in which they are applied is constrained by the absence of a closed-form expression for the proximal operator of the TV function. A closed-form approximation of the TV proximal operator has previously been proposed, but its accuracy was not theoretically explored in detail. We address this gap by making several new theoretical contributions, proving that the approximation leads to a proximal operator of some convex function, it is equivalent to a gradient descent step on a smoothed version of TV, and that its error can be fully characterized and controlled with its scaling parameter. We experimentally validate our theoretical results on image denoising and sparse-view computed tomography (CT) image reconstruction.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1217-1228"},"PeriodicalIF":4.8,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fourier Analysis of Interference Scanning Optical Probe Microscopy
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-28 | DOI: 10.1109/TCI.2025.3603741
Emmanuel Soubies;Wolfgang Bacsa
As opposed to popular far-field and near-field optical microscopy techniques, Interference Scanning Optical probe Microscopy (ISOM) operates in the intermediate-field region, where the probing distance is typically of the order of the wavelength of incident light. Specifically, ISOM enables the imaging of nanostructures through numerical inverse scattering of standing waves generated by the interference between the incident (or reflected) and scattered waves. In this work, we shed new light on this microscopy modality through an in-depth Fourier analysis. Our analysis reveals insights on the required acquisition sampling step as well as on the resolution limit of the system. Moreover, we propose two novel methods to address the associated inverse scattering problem, leveraging the intrinsic structure of the image formation model to reduce computational complexity and sensitivity to errors in model parameters. Finally, we illustrate our theoretical findings with numerical experiments.
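The sampling-step insight can be illustrated with a back-of-envelope bound (an assumption for illustration, not the paper's derived result): standing-wave intensity fringes formed by interference of incident and reflected waves vary with a period of roughly half the wavelength, so a Nyquist-style argument asks for at least two samples per fringe.

```python
def max_sampling_step(wavelength_nm: float) -> float:
    """Largest scan step (same units as the wavelength) that still samples
    lambda/2-period standing-wave fringes at the Nyquist rate."""
    fringe_period = wavelength_nm / 2.0   # counter-propagating interference
    return fringe_period / 2.0            # two samples per fringe period
```

For a 632.8 nm HeNe line this gives a step just over 158 nm; the paper's Fourier analysis refines such bounds for the actual ISOM geometry.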
{"title":"Fourier Analysis of Interference Scanning Optical Probe Microscopy","authors":"Emmanuel Soubies;Wolfgang Bacsa","doi":"10.1109/TCI.2025.3603741","DOIUrl":"https://doi.org/10.1109/TCI.2025.3603741","url":null,"abstract":"As opposed to popular far-field and near-field optical microscopy techniques, Interference Scanning Optical probe Microscopy (ISOM) operates in the intermediate-field region, where the probing distance is typically of the order of the wavelength of incident light. Specifically, ISOM enables the imaging of nanostructures through numerical inverse scattering of standing waves generated by the interference between the incident (or reflected) and scattered waves. In this work, we shed new light on this microscopy modality through an in-depth Fourier analysis. Our analysis reveals insights on the required acquisition sampling step as well as on the resolution limit of the system. Moreover, we propose two novel methods to address the associated inverse scattering problem, leveraging the intrinsic structure of the image formation model to reduce computational complexity and sensitivity to errors in model parameters. Finally, we illustrate our theoretical findings with numerical experiments.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1206-1216"},"PeriodicalIF":4.8,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Single-View Fluorescence Molecular Tomography Based on Hyperspectral NIR-II Imaging
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-25 | DOI: 10.1109/TCI.2025.3602315
Yunfei Li;Qian Liu;Fuhong Cai
Biological tissue optics has garnered significant attention in biomedical research for its non-destructive, high-sensitivity nature. However, the scattering and absorption properties of biological tissues fundamentally limit the penetration depth of optical imaging. Fluorescence molecular tomography (FMT) offers a solution balancing imaging depth and resolution, yet tissue scattering and absorption continue to challenge depth-resolved reconstruction accuracy. This study develops a sensitive near-infrared II (NIR-II) hyperspectral imaging system to investigate the relationship between fluorescence penetration depth and tissue absorption/scattering coefficients. By leveraging the strong water absorption peak around 1450 nm, we strategically divide the reconstruction object into layers within the FMT model, significantly improving the ill-posed inverse problem. We then utilize hyperspectral data to select wavelengths with progressively decreasing absorption coefficients relative to the 1450 nm peak. This enables layer-by-layer 3D reconstruction of deep biological tissues, overcoming the limitations of conventional FMT. Our method demonstrates single-perspective FMT reconstruction capable of resolving heterogeneous targets at 10 mm depth with a 0.74 Dice coefficient in depth discrimination. This spectral-dimension-enhanced FMT method enables accurate 3D reconstruction from single-view measurements. By exploiting the depth-dependent light-tissue interactions at selected NIR-II wavelengths, our approach achieves imaging quality comparable to multi-angle systems while simplifying the experimental setup. Both simulation and phantom experiments demonstrate precise target localization and shape recovery, suggesting promising potential for small animal imaging applications where system complexity and acquisition speed are critical.
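The layer-by-layer idea, choosing bands whose absorption decreases away from the 1450 nm water peak so that each band probes deeper, can be sketched like this. The band values, absorption coefficients, and selection rule below are illustrative assumptions, not the paper's actual data.

```python
def select_layer_wavelengths(bands_nm, absorption, n_layers):
    """Rank candidate NIR-II bands by water absorption (highest first, i.e.
    shallowest penetration first) and pick one band per reconstruction layer."""
    order = sorted(range(len(bands_nm)), key=lambda i: -absorption[i])
    return [bands_nm[i] for i in order[:n_layers]]
```

Bands near the 1450 nm peak then image superficial layers, while bands with progressively lower absorption reach deeper layers.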
{"title":"Single-View Fluorescence Molecular Tomography Based on Hyperspectral NIR-II Imaging","authors":"Yunfei Li;Qian Liu;Fuhong Cai","doi":"10.1109/TCI.2025.3602315","DOIUrl":"https://doi.org/10.1109/TCI.2025.3602315","url":null,"abstract":"Biological tissue optics has garnered significant attention in biomedical research for its non-destructive, high-sensitivity nature. However, the scattering and absorption properties of biological tissues fundamentally limit the penetration depth of optical imaging. Fluorescence molecular tomography (FMT) offers a solution balancing imaging depth and resolution, yet tissue scattering and absorption continue to challenge depth-resolved reconstruction accuracy. This study develops a sensitive near-infrared II (NIR-II) hyperspectral imaging system to investigate the relationship between fluorescence penetration depth and tissue absorption/scattering coefficients. By leveraging the strong water absorption peak around 1450 nm, we strategically divide the reconstruction object into layers within the FMT model, significantly improving the ill-posed inverse problem. We then utilize hyperspectral data to select wavelengths with progressively decreasing absorption coefficients relative to the 1450 nm peak. This enables layer-by-layer 3D reconstruction of deep biological tissues, overcoming the limitations of conventional FMT. Our method demonstrates single-perspective FMT reconstruction capable of resolving heterogeneous targets at 10 mm depth with a 0.74 Dice coefficient in depth discrimination. This spectraldimension-enhanced FMT method enables accurate 3D reconstruction from single-view measurements. By exploiting the depth-dependent light-tissue interactions at selected NIR-II wavelengths, our approach achieves imaging quality comparable to multi-angle systems while simplifying the experimental setup. 
Both simulation and phantom experiments demonstrate precise target localization and shape recovery, suggesting promising potential for small animal imaging applications where system complexity and acquisition speed are critical.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1161-1173"},"PeriodicalIF":4.8,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Multi-Source Illumination Color Constancy Through Physics-Based Rendering and Spectral Power Distribution Embedding
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-20 | DOI: 10.1109/TCI.2025.3598440
Xinhui Xue;Hai-Miao Hu;Zhuang He;Haowen Zheng
Color constancy seeks to keep the perceived color of objects consistent under varying illumination conditions. However, existing methods often rely on restrictive prior assumptions or suffer from limited generalization capability, posing significant challenges in complex scenes with multiple light sources. In this paper, we propose a neural network-enhanced, physics-based approach to multi-illuminant color constancy that leverages spectral imaging, which is highly sensitive to illumination variation. First, we analyze the physical image-formation process under mixed lighting and introduce a master-subordinate illumination model, extending conventional correlated-color-temperature re-illumination techniques. Our neural network framework explicitly models the correlation between narrow-band spectral reflectance and the spectral power distribution (SPD) of the illumination, enabling accurate recovery of the scene light's full SPD. Using this model, we fuse RGB images with the estimated illumination spectra to predict illuminant chromaticity precisely, then correct image colors to a standard reference light. Extensive experiments on synthetic multi-color-temperature datasets and real-world spectral datasets demonstrate that our neural network-based method achieves state-of-the-art accuracy in spectral estimation and color-constancy correction.
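Once the illuminant chromaticity is estimated, mapping the image to a standard reference light is commonly done with a diagonal (von Kries-style) per-channel gain. A minimal sketch of that last step follows; the diagonal model is a standard stand-in and not necessarily the paper's exact correction.

```python
import numpy as np

def correct_to_reference(img, est_illum, ref_illum=(1.0, 1.0, 1.0)):
    """Scale each RGB channel by the reference/estimated illuminant ratio,
    clipping to the valid [0, 1] range."""
    gains = np.asarray(ref_illum, dtype=float) / np.asarray(est_illum, dtype=float)
    return np.clip(img * gains, 0.0, 1.0)
```

A reddish cast (low estimated blue, say) produces a gain above one on the blue channel, pulling the image back toward the neutral reference.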
{"title":"Towards Multi-Source Illumination Color Constancy Through Physics-Based Rendering and Spectral Power Distribution Embedding","authors":"Xinhui Xue;Hai-Miao Hu;Zhuang He;Haowen Zheng","doi":"10.1109/TCI.2025.3598440","DOIUrl":"https://doi.org/10.1109/TCI.2025.3598440","url":null,"abstract":"Color constancy seeks to keep the perceived color of objects consistent under varying illumination conditions. However, existing methods often rely on restrictive prior assumptions or suffer from limited generalization capability, posing significant challenges in complex scenes with multiple light sources. In this paper, we propose a neural network-enhanced, physics-based approach to multi-illuminant color constancy that leverages spectral imaging—highly sensitive to illumination variation. First, we analyze the physical image-formation process under mixed lighting and introduce a master–subordinate illumination model, extending conventional correlated-color-temperature re-illumination techniques. Our neural network framework explicitly models the correlation between narrow-band spectral reflectance and the spectral power distribution (SPD) of the illumination, enabling accurate recovery of the scene light’s full SPD. Using this model, we fuse RGB images with the estimated illumination spectra to predict illuminant chromaticity precisely, then correct image colors to a standard reference light. 
Extensive experiments on synthetic multi–color-temperature datasets and real-world spectral datasets demonstrate that our neural network-based method achieves state-of-the-art accuracy in spectral estimation and color-constancy correction.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1349-1360"},"PeriodicalIF":4.8,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145255921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Video Super-Resolution: Spatiotemporal Fusion for Sparse Camera Array
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-18 | DOI: 10.1109/TCI.2025.3599774
Xudong Liu;Tianren Li;Yu Zhang;Yufu Qu;Zhenzhong Wei
A sparse camera array captures multiple images of a scene within the same spatial plane, enabling super-resolution reconstruction. However, existing methods often fail to fully exploit time as an additional dimension for enhanced information acquisition. Even when temporal and spatial observations are collected simultaneously, their individual contributions are often conflated. Analysis of the system’s imaging model reveals that the spatiotemporal camera system, integrating a camera array with video sequences, holds greater potential for degradation recovery. Based on these insights, we propose a novel multi-video super-resolution network for spatiotemporal information fusion. Guided by explicit physical dimensional orientation, the network effectively integrates spatial information and propagates it along the temporal dimension. By utilizing diverse and informative spatiotemporal sampling, our method more readily addresses challenges arising from ill-posed mapping matrices during reconstruction. Experimental results on both synthetic and real-world datasets show that the components of our network, with information fully propagated and spatiotemporally fused, work synergistically to enhance super-resolution performance, providing substantial improvements over state-of-the-art methods. We believe our study can inspire innovations for future super-resolution tasks by optimizing information acquisition and utilization.
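The degradation model such a system inverts can be pictured, per camera and frame, as a viewpoint shift followed by sensor integration and downsampling. The toy forward operator below uses a circular shift and a box kernel as illustrative assumptions; the paper's actual mapping matrices are more general.

```python
import numpy as np

def observe(hr_frame, shift, scale):
    """Toy spatiotemporal observation: shift the scene for one sub-camera /
    time instant, then box-average and downsample by `scale`."""
    x = np.roll(hr_frame, shift, axis=(0, 1))   # sub-pixel viewpoint offset (integer here)
    h, w = x.shape
    # reshape into scale x scale blocks and average = box blur + decimation
    return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
```

Each camera and each frame yields a differently shifted low-resolution observation; super-resolution fuses these complementary samplings to invert the ill-posed operator.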
{"title":"Multi-Video Super-Resolution: Spatiotemporal Fusion for Sparse Camera Array","authors":"Xudong Liu;Tianren Li;Yu Zhang;Yufu Qu;Zhenzhong Wei","doi":"10.1109/TCI.2025.3599774","DOIUrl":"https://doi.org/10.1109/TCI.2025.3599774","url":null,"abstract":"A sparse camera array captures multiple images of a scene within the same spatial plane, enabling super-resolution reconstruction. However, existing methods often fail to fully exploit time as an additional dimension for enhanced information acquisition. Even when temporal and spatial observations are collected simultaneously, their individual contributions are often conflated. Analysis of the system’s imaging model reveals that the spatiotemporal camera system, integrating a camera array with video sequences, holds greater potential for degradation recovery. Based on these insights, we propose a novel multi-video super-resolution network for spatiotemporal information fusion. Guided by explicit physical dimensional orientation, the network effectively integrates spatial information and propagates it along the temporal dimension. By utilizing diverse and informative spatiotemporal sampling, our method more readily addresses challenges arising from ill-posed mapping matrices during reconstruction. Experimental results on both synthetic and real-world datasets show that the components of our network, with information fully propagated and spatiotemporally fused, work synergistically to enhance super-resolution performance, providing substantial improvements over state-of-the-art methods. 
We believe our study can inspire innovations for future super-resolution tasks by optimizing information acquisition and utilization.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1087-1098"},"PeriodicalIF":4.8,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Cardiac Cine MRI Reconstruction With Spatiotemporal Diffusion Model
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-13 | DOI: 10.1109/TCI.2025.3598421
Zi Wang;Jiahao Huang;Mingkai Huang;Chengyan Wang;Guang Yang;Xiaobo Qu
Accelerated dynamic magnetic resonance imaging (MRI) is in high demand for clinical applications. However, its reconstruction remains challenging due to the inherently high dimensionality and spatiotemporal complexity. While diffusion models have demonstrated robust performance in spatial imaging, their application to spatiotemporal data has been underexplored. To address this gap, we propose a novel spatiotemporal diffusion model (STDM) specifically designed for robust dynamic MRI reconstruction. Our approach decomposes the complex 3D diffusion process into manageable sub-problems by focusing on 2D spatiotemporal images, thereby reducing dimensionality and enhancing computational efficiency. Each 2D image is treated independently, allowing for a parallel reverse diffusion process guided by data consistency to ensure measurement alignment. To further improve the image quality, we introduce a dual-directional diffusion framework (dSTDM), which simultaneously performs reverse diffusion along two orthogonal directions, effectively capturing the full 3D data distribution. Comprehensive experiments on cardiac cine MRI datasets demonstrate that our approach achieves state-of-the-art performance in highly accelerated reconstruction. Additionally, it exhibits preliminary robustness across various undersampling scenarios and unseen datasets, including patient data, non-Cartesian radial sampling, and different anatomies.
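The data-consistency guidance mentioned in the abstract is commonly implemented as a hard projection in k-space after each reverse-diffusion step. A minimal single-coil Cartesian sketch follows; hard replacement of the sampled entries is one common choice, assumed here rather than quoted from the paper.

```python
import numpy as np

def data_consistency(x, y_kspace, mask):
    """Replace the current estimate's k-space values with the acquired
    samples wherever the undersampling mask is True, then return to
    image space."""
    k = np.fft.fft2(x)
    k = np.where(mask, y_kspace, k)
    return np.fft.ifft2(k)
```

With a fully sampled mask this projection recovers the measured image exactly, regardless of the current estimate, which is the sanity check below.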
{"title":"Robust Cardiac Cine MRI Reconstruction With Spatiotemporal Diffusion Model","authors":"Zi Wang;Jiahao Huang;Mingkai Huang;Chengyan Wang;Guang Yang;Xiaobo Qu","doi":"10.1109/TCI.2025.3598421","DOIUrl":"https://doi.org/10.1109/TCI.2025.3598421","url":null,"abstract":"Accelerated dynamic magnetic resonance imaging (MRI) is highly expected in clinical applications. However, its reconstruction remains challenging due to the inherently high dimensionality and spatiotemporal complexity. While diffusion models have demonstrated robust performance in spatial imaging, their application to spatiotemporal data has been underexplored. To address this gap, we propose a novel spatiotemporal diffusion model (STDM) specifically designed for robust dynamic MRI reconstruction. Our approach decomposes the complex 3D diffusion process into manageable sub-problems by focusing on 2D spatiotemporal images, thereby reducing dimensionality and enhancing computational efficiency. Each 2D image is treated independently, allowing for a parallel reverse diffusion process guided by data consistency to ensure measurement alignment. To further improve the image quality, we introduce a dual-directional diffusion framework (dSTDM), which simultaneously performs reverse diffusion along two orthogonal directions, effectively capturing the full 3D data distribution. Comprehensive experiments on cardiac cine MRI datasets demonstrate that our approach achieves state-of-the-art performance in highly accelerated reconstruction. 
Additionally, it exhibits preliminary robustness across various undersampling scenarios and unseen datasets, including patient data, non-Cartesian radial sampling, and different anatomies.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1258-1270"},"PeriodicalIF":4.8,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Double-Branched and Multi-Magnetic Directions Feature Fusion Network (DB&MDF2-Net) for the Accurate Reconstruction of Magnetic Particle Imaging
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-13 | DOI: 10.1109/TCI.2025.3598455
Jintao Li;Lizhi Zhang;Shuangchen Li;Huanlong Gao;Shuaishuai He;Yizhe Zhao;Xiaowei He;Yuqing Hou;Hongbo Guo
Objective: Magnetic particle imaging (MPI) is a novel non-destructive medical imaging method that visualizes the spatial distribution of superparamagnetic iron oxide nanoparticles. However, because of the non-uniformity of the selection and drive fields, the non-ideal response of the receive coils, and the different components of the magnetization signal (induced electromotive force) detected by the orthogonal coils, processing the voltage signals measured by the receive coils in different directions without discrimination degrades reconstruction quality. Methods: This study introduces the Double-Branched and Multi-Magnetic Directions Feature Fusion Network (DB&MDF2-Net) to address these challenges. The dual-branch (DB) strategy processes the X- and Y-directional magnetic field components independently, reducing information confusion. Each branch has a dual-sampling feature (DSF) layer that captures multi-scale spatial information and preserves spatial structure, enhancing the extraction of particle distribution and edge details. Additionally, a multi-head self-attention transformer (MSA-T) layer efficiently integrates features from different modules, allowing the network to learn complex inter-feature relationships. Results: The effectiveness of the DB strategy and the DSF and MSA-T layers in our proposed method was validated through ablation experiments. Simulation and phantom experiments further demonstrate significant improvements in the detail capture and anti-noise capability of DB&MDF2-Net without any hardware modifications, enabling more precise restoration of real particle distribution characteristics. Conclusion: These findings suggest that DB&MDF2-Net can significantly improve the imaging accuracy of MPI. Significance: This research is expected to enhance the practicality of MPI in biomedical applications and contribute to the future development of MPI technology.
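The MSA-T fusion layer presumably builds on standard scaled dot-product self-attention. A dependency-free sketch with identity Q/K/V projections is shown below (purely illustrative; real transformer layers learn projection weights per head).

```python
import numpy as np

def multi_head_self_attention(x, n_heads):
    """Scaled dot-product self-attention over the token rows of x
    (shape: n_tokens x d), with identity Q/K/V projections split
    evenly across heads."""
    n, d = x.shape
    dh = d // n_heads
    out = np.zeros_like(x, dtype=float)
    for h in range(n_heads):
        q = k = v = x[:, h * dh:(h + 1) * dh]
        scores = q @ k.T / np.sqrt(dh)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)   # row-wise softmax
        out[:, h * dh:(h + 1) * dh] = w @ v
    return out
```

Each output token is a softmax-weighted mixture of all tokens, which is what lets such a layer relate features arriving from the two magnetic-direction branches.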
{"title":"Double-Branched and Multi-Magnetic Directions Feature Fusion Network (DB&MDF2-Net) for the Accurate Reconstruction of Magnetic Particle Imaging","authors":"Jintao Li;Lizhi Zhang;Shuangchen Li;Huanlong Gao;Shuaishuai He;Yizhe Zhao;Xiaowei He;Yuqing Hou;Hongbo Guo","doi":"10.1109/TCI.2025.3598455","DOIUrl":"https://doi.org/10.1109/TCI.2025.3598455","url":null,"abstract":"<italic>Objective:</i> Magnetic particle imaging (MPI) is a novel non-destructive medical imaging method that visualizes the spatial distribution of superparamagnetic iron oxide nanoparticles. However, due to the non-uniformity of the selection and drive field, the unsatisfactory of the receive coil and the different components of the magnetization signal (induced electromotive force) detected by the orthogonal coil, processing the voltage signals measured by the receiving coils in different directions without discrimination will affect the reconstruction quality. <italic>Methods:</i> This study introduces the Double-Branched and Multi-Magnetic Directions Feature Fusion Network (DB&MDF2-Net) to address these challenges. The dual-branch(DB) strategy processes X and Y-directional magnetic field components independently, reducing information confusion. Each branch has a dual-sampling feature(DSF) layer that captures multi-scale spatial information and preserves spatial structure, enhancing the extraction of particle distribution and edge details. Additionally, a multi-head self-attention transformer(MSA-T) layer efficiently integrates features from different modules, allowing the network to learn complex inter-feature relationships. <italic>Results:</i> The effectiveness of the DB strategy, DSF and MSA-T layers in our proposed method were validated through ablation experiments. 
Simulate and phantom experiments further demonstrate significant improvements in detail capture and anti-noise capability of DB&MDF2-Net without any hardware modifications, enabling more precise restoration of real particle distribution characteristics. <italic>Conclusion:</i> These findings suggest that DB&MDF2-Net can significantly improve the imaging accuracy of MPI. <italic>Significance:</i> This research is expected to enhance the practicality of MPI in biomedical applications and contribute to the future development of MPI technology.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1074-1086"},"PeriodicalIF":4.8,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144918326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HDN: Hybrid Deep-Learning and Non-Line-of-Sight Reconstruction Framework for Transcranial Photoacoustic Imaging of Human Brain
IF 4.8 | Zone 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-08-11 | DOI: 10.1109/TCI.2025.3594073
Pengcheng Wan;Fan Zhang;Yuting Shen;Hulin Zhao;Xiran Cai;Xiaohua Feng;Fei Gao
Photoacoustic imaging combines the high contrast of optical imaging with the deep penetration depth of ultrasonic imaging, showing great potential in cerebrovascular disease detection. However, the ultrasonic wave suffers strong attenuation and multi-scattering when it passes through the skull tissue, resulting in the distortion of the collected photoacoustic signal. In this paper, inspired by the principles of deep learning and non-line-of-sight imaging, we propose an image reconstruction framework named HDN (Hybrid Deep-learning and Non-line-of-sight), which consists of the signal extraction part and difference utilization part. The signal extraction part is used to correct the distorted signal and reconstruct an initial image. The difference utilization part is used to make further use of the signal difference between the distorted signal and corrected signal, reconstructing the residual image between the initial image and the target image. The test results on a photoacoustic digital brain simulation dataset show that compared with the traditional method (delay-and-sum) and deep-learning-based method (UNet), the HDN achieved superior performance in both signal correction and image reconstruction. Specifically for the structural similarity index, the HDN reached 0.661 in imaging results, compared to 0.157 for the delay-and-sum method and 0.305 for the deep-learning-based method.
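Delay-and-sum, the traditional baseline in the comparison above, reconstructs each pixel by summing the channel signals at the acoustic time-of-flight from that pixel to each sensor. A toy single-pixel version follows, with integer sample delays assumed for simplicity.

```python
import numpy as np

def delay_and_sum_pixel(signals, delays):
    """Sum each channel's sample at its per-channel time-of-flight delay.
    `signals`: array of shape (n_channels, n_samples);
    `delays`: one integer sample index per channel for the pixel in question."""
    return float(sum(sig[d] for sig, d in zip(signals, delays)))
```

A full image repeats this over every pixel with delays computed from geometry and sound speed; the skull's attenuation and multi-scattering violate exactly those assumptions, which is what HDN's learned correction targets.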
Citations: 0
A Helical Reconstruction Network for Multi-Source Static CT
IF 4.8 Zone 2 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-08-11 DOI: 10.1109/TCI.2025.3597449
Chunliang Ma;Kaiwen Tan;Yunxiang Li;Shouhua Luo
Nanovision static CT is an innovative CT scanning technique that arranges the X-ray source array and detector array on two parallel planes with a fixed offset. This configuration significantly enhances temporal resolution compared to conventional CT, which is particularly advantageous for dynamic organ imaging and low-dose imaging applications. However, it also introduces cone-angle and sparse-angle artifacts during helical scanning. To address this, this paper proposes a theoretical analysis framework that systematically analyzes how the traditional FDK algorithm generates these artifacts in this scenario. Through numerical solutions and data superposition, we attribute the artifacts, for the first time, to two types of data incompleteness: missing cone-angle data and insufficient angular sampling. Building on these insights, we propose a dual-module collaborative reconstruction network. First, we introduce the Helical Bi-directional xFDK algorithm (HbixFDK), which employs a limited-angle weighted compensation strategy to mitigate data incompleteness in the cone-angle region. Next, we develop the attention-based Helical FISTA network (HFISTA-Net), which uses the output of HbixFDK as the initial reconstruction to effectively suppress sparse-sampling artifacts. Extensive experiments on the TCIA dataset and clinical static CT scans demonstrate that the proposed method significantly reduces both cone-angle and sparse-angle artifacts in static CT helical scanning, achieving rapid, high-precision helical reconstruction with superior accuracy and computational efficiency.
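HFISTA-Net unfolds the classical FISTA iteration into network layers. For reference, here is a plain NumPy implementation of the generic FISTA scheme it unrolls, solving the l1-regularized least-squares problem min_x 0.5||Ax - y||^2 + λ||x||_1 (a textbook sketch, not the paper's network; the fixed soft-threshold λ stands in for the adaptive learned thresholds described above):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam, n_iter=100, x0=None):
    """Classical FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)           # gradient of the data term at z
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x
```

In an unfolded network, each loop iteration becomes a layer and quantities such as the threshold lam/L become learnable parameters, which is the design choice HFISTA-Net builds on.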
Citations: 0
Non-Line-of-Sight mmW SAR Imaging With Equivariant Adaptive Threshold Learning
IF 4.8 Zone 2 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-08-11 DOI: 10.1109/TCI.2025.3597462
Xiang Cai;Shunjun Wei;Mou Wang;Hao Zhang;Kun Chen;Xinyuan Liu;Jun Shi;Guolong Cui
High-precision 2-D/3-D Synthetic Aperture Radar (SAR) image reconstruction from the indirect scattered echoes of hidden targets represents a core technical challenge in millimeter-wave (mmW) Non-Line-of-Sight (NLOS) environmental perception. Deep learning approaches have demonstrated exceptional performance in SAR imaging. However, existing methods are predominantly designed for Line-of-Sight (LOS) scenarios, where clean LOS simulation signals can be acquired for training purposes, a condition often difficult or impossible to meet in NLOS imaging due to complex multipath environments and noise. To tackle this issue within specific NLOS configurations, particularly those involving strong specular reflections from discrete, isolated hidden objects, we propose an Equivariant Imaging (EI) framework tailored for mmW SAR. The EI framework is a fully self-supervised learning approach that leverages the group invariance present in signal distributions, enabling robust image reconstruction from partial NLOS measurements contaminated with noise and multipath artifacts. In our method, the reconstruction function is based on a deep unfolding network with Total Variation (TV) constraints, mapping the NLOS scattered echoes to the target image. Moreover, we introduce an Adaptive Peak Convolution Network (APConv) into the reconstruction process to dynamically adjust thresholds, replacing traditional fixed-threshold methods. This enhances imaging flexibility and quality under these defined NLOS conditions. Finally, we validate the proposed method using various NLOS echo data collected through an experimental mmW system. Numerical and visual results both demonstrate the effectiveness of our approach for NLOS mmW SAR imaging tasks. The proposed EI framework thus offers a promising approach for advancing NLOS mmW SAR perception capabilities, particularly for environments and target configurations aligning with those investigated and supported by our current experiments.
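The equivariant-imaging training signal described in this abstract boils down to two loss terms: measurement consistency on the observed data, and invariance of the reconstruction under a group action composed with the forward operator. A toy NumPy sketch of these two losses follows (the forward operator, reconstruction map, and transform here are generic placeholders, not the paper's mmW SAR model or APConv network):

```python
import numpy as np

def ei_losses(y, forward, recon, transform):
    """Self-supervised equivariant-imaging losses.

    y         : observed measurement vector
    forward   : x -> y, the known measurement operator A
    recon     : y -> x, the reconstruction map f (a network in practice)
    transform : x -> x, a group action T (e.g. rotation or shift)
    """
    x1 = recon(y)
    mc = np.mean((forward(x1) - y) ** 2)   # measurement consistency: A f(y) ≈ y
    x2 = transform(x1)                     # "virtual" signal T x1
    x3 = recon(forward(x2))                # re-measure and re-reconstruct it
    eq = np.mean((x3 - x2) ** 2)           # equivariance: f(A T x1) ≈ T x1
    return mc, eq
```

Minimizing a weighted sum of the two terms lets the reconstruction map train on measurements alone, which is what makes the framework usable when clean LOS training signals are unavailable.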
从隐藏目标的间接散射回波中重建高精度二维/三维合成孔径雷达(SAR)图像是毫米波(mmW)非视距(NLOS)环境感知的核心技术挑战。深度学习方法在SAR成像中表现出优异的性能。然而,现有的方法主要是为视距(LOS)场景设计的,在视距场景中,干净的LOS模拟信号可以用于训练目的,由于复杂的多径环境和噪声,NLOS成像通常难以或不可能满足这一条件。为了在特定的NLOS配置中解决这个问题,特别是那些涉及离散的、孤立的隐藏物体的强镜面反射的NLOS配置,我们提出了一种为毫米波SAR定制的等变成像(EI)框架。EI框架是一种完全自监督的学习方法,利用信号分布中的群不变性,能够从受噪声和多路径伪像污染的部分NLOS测量中实现鲁棒图像重建。在我们的方法中,重构函数基于具有全变分(TV)约束的深度展开网络,将NLOS散射回波映射到目标图像。此外,我们在重建过程中引入自适应峰值卷积网络(APConv)来动态调整阈值,取代传统的固定阈值方法。这提高了在这些定义的NLOS条件下成像的灵活性和质量。最后,我们通过实验毫米波系统收集的各种NLOS回波数据验证了所提出的方法。数值和视觉结果都证明了我们的方法对NLOS毫米波SAR成像任务的有效性。因此,提出的EI框架为推进NLOS毫米波SAR感知能力提供了一种有希望的方法,特别是对于与我们当前实验研究和支持的环境和目标配置相一致的环境和目标配置。
Citations: 0
IEEE Transactions on Computational Imaging