
Latest Publications in IEEE Transactions on Computational Imaging

Joint Translational Motion Compensation of ISAR Imaging for Uniformly Accelerated Motion Targets Based on MPD-TSLS Under Low SNR
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-19 | DOI: 10.1109/TCI.2026.3655460
Tao Liu;Yu Wang;Biao Tian;Shiyou Xu;Zengping Chen
Effective and accurate translational motion compensation is crucial for inverse synthetic aperture radar (ISAR) imaging. Traditional data-based translational motion compensation methods are inapplicable to uniformly accelerated targets in low signal-to-noise ratio (SNR) scenarios. In this paper, a parametric approach is proposed based on a joint modified phase difference and two-step least squares (MPD-TSLS). The method employs a second-order polynomial model for the translational motion, and the MPD operation concentrates the energy of all scatterers into a single range cell and a single Doppler cell, respectively. The acceleration and velocity estimates are then obtained by TSLS. To enhance precision, an LS-based optimization that refines the polynomial parameters is also employed. The proposed method achieves a substantial increase in SNR, ensuring precise compensation while maintaining high computational efficiency, because it relies exclusively on fast Fourier transform (FFT) and matrix operations. Experimental results on both simulated and real datasets verify that the proposed method outperforms the other implemented methods.
Citations: 0
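
The core estimation step can be illustrated with a toy example. The sketch below is not the authors' implementation; it assumes a single scatterer, hypothetical radar parameters, and a velocity inside the unambiguous interval, and it collapses the two least-squares steps into one line fit. It shows how a phase difference between adjacent pulses turns the quadratic phase of a uniformly accelerated target into a linear one that a least-squares fit recovers:

```python
import numpy as np

# Toy model: one scatterer with range R(t) = R0 + v*t + 0.5*a*t**2 and
# azimuth phase phi(t) = -(4*pi/lam) * R(t).
rng = np.random.default_rng(0)
lam = 0.03                      # wavelength [m] (hypothetical radar)
prf = 500.0                     # pulse repetition frequency [Hz]
dt = 1.0 / prf
t = np.arange(512) * dt

R0, v, a = 1.0e4, 3.0, 8.0      # v kept inside the unambiguous interval lam/(4*dt)
phase = -4 * np.pi / lam * (R0 + v * t + 0.5 * a * t**2)
s = np.exp(1j * phase) + 0.2 * (rng.standard_normal(t.size)
                                + 1j * rng.standard_normal(t.size))

# Phase difference of adjacent pulses: the constant R0 term cancels and the
# quadratic term becomes linear in t:
#   phi_pd(t_m) = -(4*pi/lam) * dt * (v + 0.5*a*dt + a*t_m)
pd_phase = np.unwrap(np.angle(s[1:] * np.conj(s[:-1])))

# Least-squares line fit (the two estimation steps collapse into one
# polyfit in this single-scatterer toy).
slope, intercept = np.polyfit(t[:-1], pd_phase, 1)
a_hat = -slope * lam / (4 * np.pi * dt)
v_hat = -intercept * lam / (4 * np.pi * dt) - 0.5 * a_hat * dt
print(f"v_hat = {v_hat:.2f} m/s (true 3.0), a_hat = {a_hat:.2f} m/s^2 (true 8.0)")
```
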
DPD-DEMD: Denoising Prior Guided Weakly Supervised Image Domain Dual-Energy Material Decomposition
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-16 | DOI: 10.1109/TCI.2026.3654807
Xinyun Zhong;Xu Zhuo;Tianling Lyu;Yikun Zhang;Qianjin Feng;Guotao Quan;Xu Ji;Yang Chen
Dual-energy material decomposition is widely used in clinical diagnosis, especially for material characterization. However, conventional image-domain methods suffer from noise amplification, which reduces the signal-to-noise ratio and compromises diagnostic accuracy. Although deep learning approaches have shown significant progress, they often require high-quality paired or unpaired labels, limiting their clinical application. To address these issues, this work explores the feasibility of weakly supervised methods and proposes a denoising prior guided weakly supervised learning framework, DPD-DEMD, to achieve high-accuracy image-domain dual-energy material decomposition. DPD-DEMD utilizes pretrained CT denoising models to construct robust priors for the dual-energy material decomposition task. Furthermore, we propose an adaptive confidence mask mechanism for pseudo-label generation and a multi-prior fusion strategy, thereby substantially improving the stability and reliability of the weakly supervised learning process. In addition, we fully exploit the correlation between the dual-energy images and further propose a global-local regularization loss to improve decomposition accuracy. Extensive experiments on both simulated and clinical datasets verify the superior performance and robustness of the proposed method, demonstrating its potential clinical value in material decomposition.
Citations: 0
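
For readers unfamiliar with the baseline operation, the following minimal sketch shows plain image-domain two-material decomposition with a denoiser applied first, the step that DPD-DEMD's learned priors and confidence masks refine. The attenuation numbers are illustrative, not clinical, and a Gaussian filter stands in for the pretrained CT denoising model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Model (per pixel): [mu_low, mu_high]^T = A @ [x_water, x_bone]^T, where
# the columns of A hold each basis material's attenuation at the two kVp.
A = np.array([[0.20, 0.50],      # attenuation at low kVp  [1/cm]
              [0.18, 0.30]])     # attenuation at high kVp [1/cm]

def decompose(mu_low, mu_high, denoise_sigma=1.0):
    """Denoise the CT pair, then invert the 2x2 mixing matrix per pixel."""
    mu_low = gaussian_filter(mu_low, denoise_sigma)    # denoising prior
    mu_high = gaussian_filter(mu_high, denoise_sigma)  # (placeholder)
    mu = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, N)
    basis = np.linalg.solve(A, mu)                     # shape (2, N)
    return basis.reshape(2, *mu_low.shape)             # water, bone maps

# Toy phantom: a bone disk embedded in a water disk, plus noise.
yy, xx = np.mgrid[-64:64, -64:64]
water = (xx**2 + yy**2 < 60**2).astype(float)
bone = (xx**2 + yy**2 < 15**2).astype(float)
rng = np.random.default_rng(0)
mu_low = A[0, 0] * water + A[0, 1] * bone + 0.01 * rng.standard_normal(water.shape)
mu_high = A[1, 0] * water + A[1, 1] * bone + 0.01 * rng.standard_normal(water.shape)

water_hat, bone_hat = decompose(mu_low, mu_high)
```
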
Data-Driven Multi-keV Virtual Monoenergetic Images Generation From Single-Energy CT Guided by Image-Domain Material Decomposition
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-15 | DOI: 10.1109/TCI.2026.3653309
Wenwen Zhang;Zihan Chai;Yantao Niu;Zhijie Zhang;Linxuan Li;Baohua Sun;Junfang Xian;Wei Zhao
Virtual monoenergetic images (VMIs), reconstructed from dual-energy CT (DECT) by capturing photon attenuation data at two distinct energy levels, can reduce beam-hardening artifacts and provide more quantitatively accurate attenuation measurements. Data-driven deep learning approaches have demonstrated the feasibility of synthesizing VMIs from conventional single-energy CT (SECT) scans. However, such methods do not incorporate physics-related information, which compromises their interpretability and robustness. Here we propose a novel hybrid data-driven framework that synergizes convolutional neural networks with physics-based material decomposition derived from DECT principles. This approach directly yields high-quality VMIs across various keV levels from SECT acquisitions. Through rigorous validation on 130 clinical cases spanning diverse anatomical regions and pathological conditions, our method demonstrates significant improvements over conventional purely data-driven approaches, as evidenced by enhanced anatomical visualization and superior performance on quantitative metrics. By eliminating dependence on DECT hardware while maintaining computational efficiency and incorporating physics-guided constraints, our framework leverages the widespread availability of SECT to provide a cost-effective, high-performance solution for diagnostic imaging in routine clinical practice.
Citations: 0
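
The physics step that guides the framework can be sketched compactly: once basis material images are available, a VMI at energy E is a linear combination weighted by each material's mass attenuation coefficient at E. The coefficients below are rough illustrative values rather than reference data, and the function name is hypothetical:

```python
import numpy as np

MASS_ATTEN = {          # energy [keV] -> (water, bone) mu/rho [cm^2/g]
    40: (0.268, 0.665),
    70: (0.193, 0.262),
    100: (0.171, 0.186),
}

def synthesize_vmi(density_water, density_bone, kev):
    """VMI(E) = (mu/rho)_water(E) * rho_water + (mu/rho)_bone(E) * rho_bone."""
    mw, mb = MASS_ATTEN[kev]
    return mw * density_water + mb * density_bone

# Usage with toy basis maps (density in g/cm^3):
water_map = np.ones((8, 8))
bone_map = np.zeros((8, 8)); bone_map[3:5, 3:5] = 1.9
vmi_70 = synthesize_vmi(water_map, bone_map, kev=70)
```
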
Learning Degradation-Aware Diffusion Prior for Hyperspectral Reconstruction From RGB Image
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-01 | DOI: 10.1109/TCI.2025.3650359
Jingxiang Yang;Haifeng Xu;Heyuan Yin;Hongyi Liu;Liang Xiao
Hyperspectral images (HSIs) are useful in many fields owing to their ability to discriminate different materials, but collecting them usually requires expensive hardware and long acquisition times. Reconstructing an HSI from an RGB image, also called spectral super-resolution (SSR), is an affordable and feasible way to acquire HSIs. Despite the SSR results achieved by existing deep unfolding networks (DUNs), they still face challenges in: 1) recovering fine-grained, realistic details; and 2) suppressing spectral distortion. Diffusion models have advantages in generating diverse and realistic content, but their fidelity is limited by inherent randomness. In this study, to reconstruct a faithful and realistic HSI, we integrate the diffusion model into a DUN and propose a degradation-aware unrolling diffusion model for SSR (deDiff-SSR). The generative diffusion prior is leveraged jointly with the spectral degradation and deep prior learning. Specifically, we first pre-train a channel-attention-enhanced denoising diffusion probabilistic model (DDPM), in which spectral correlation is exploited to learn the diffusion prior of the HSI. To account for the degradation, we optimize an HSI SSR model regularized by diffusion and deep priors and propose a degradation-aware diffusion sampling method, in which the spectral degradation is learned to refine each diffusion sampling step. By unrolling the degradation-aware diffusion sampling steps, we build the deDiff-SSR network. It contains diffusion and deep proximal operators that represent the diffusion and deep priors, respectively. We implement the diffusion proximal operator with one sampling step of the pre-trained DDPM. Moreover, we design a state-space Transformer as the deep proximal operator, so the spectral-spatial long-range relationships of the HSI can be captured efficiently. Experiments on several indoor and remote sensing datasets demonstrate the effectiveness of deDiff-SSR.
Citations: 0
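
The unrolled structure can be sketched in a few lines: alternate a gradient step on the spectral degradation model with a prior step. In the sketch below (a toy, not deDiff-SSR itself), a simple spectral smoother stands in for both the DDPM sampling step and the state-space Transformer, and the spectral response matrix S is synthetic:

```python
import numpy as np

def prior_step(x):
    """Placeholder for the diffusion / deep proximal operators:
    a mild smoothing along the spectral (band) axis."""
    return 0.5 * x + 0.25 * (np.roll(x, 1, axis=-1) + np.roll(x, -1, axis=-1))

def unrolled_ssr(rgb, S, n_stages=10, step=0.3):
    """rgb: (H, W, 3); S: (3, B) camera spectral response; returns (H, W, B)."""
    x = rgb @ np.linalg.pinv(S).T          # crude spectral initialization
    for _ in range(n_stages):
        resid = x @ S.T - rgb              # residual of the degradation model
        x = x - step * (resid @ S)         # gradient step on ||S x - y||^2
        x = prior_step(x)                  # proximal / prior step
    return x

# Toy spectral response: three Gaussian channel sensitivities over 16 bands.
B = 16
grid = np.arange(B)
S = np.stack([np.exp(-0.5 * ((grid - c) / 2.5) ** 2) for c in (3, 8, 13)])
S /= S.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
hsi_true = rng.random((8, 8, B))
hsi_hat = unrolled_ssr(hsi_true @ S.T, S)  # reconstruct from simulated RGB
```
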
ICSD-NeRF: Independent Canonical Spaces for Enhanced Dynamic Scene Modeling in Neural Radiance Fields
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-30 | DOI: 10.1109/TCI.2025.3649390
Suwoong Yeom;Hosung Son;Chanhee Kang;Eunho Shin;Joonsoo Kim;Kug-jin Yun;Suk-Ju Kang
Novel view synthesis for dynamic scenes is a critical challenge in computer vision and computational imaging. Despite significant advancements, generating realistic images from monocularly captured dynamic scenes remains a complex task. Recent methods leveraging neural radiance fields and 3D Gaussian splatting in canonical spaces have made notable progress. However, these approaches often estimate both color and geometric information within a single space, limiting their effectiveness in handling large deformations of dynamic objects or significant color variations. To address these challenges, we propose ICSD-NeRF, which optimizes color and geometry in separate canonical spaces. Additionally, we introduce decision fields to effectively distinguish and optimize static and dynamic objects, enabling dynamic regions to be disentangled from the static background during training. To further enhance the representation of geometric structures in static regions, we employ an MLP to refine geometric features. We validate our approach on widely used dynamic scene novel view synthesis datasets, demonstrating that ICSD-NeRF outperforms existing methods by achieving higher rendering accuracy. Notably, our method achieves higher PSNR scores on benchmark datasets than current state-of-the-art techniques.
Citations: 0
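
One plausible reading of the independent-canonical-spaces idea is sketched below in PyTorch: a point sampled at (x, t) is warped by two separate deformation fields into a geometry canonical space and a color canonical space, each queried by its own network. Positional encoding, the decision fields, the static/dynamic split, and volume rendering are all omitted, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

def mlp(din, dout, width=64, depth=3):
    """Small fully connected network with ReLU hidden layers."""
    layers, d = [], din
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, dout))
    return nn.Sequential(*layers)

class ICSDField(nn.Module):
    def __init__(self):
        super().__init__()
        self.deform_geo = mlp(4, 3)   # (x, y, z, t) -> geometry-space offset
        self.deform_col = mlp(4, 3)   # (x, y, z, t) -> color-space offset
        self.geometry = mlp(3, 1)     # geometry canonical xyz -> density
        self.color = mlp(6, 3)        # color canonical xyz + view dir -> rgb

    def forward(self, xyz, t, view_dir):
        xt = torch.cat([xyz, t], dim=-1)
        can_g = xyz + self.deform_geo(xt)   # independent canonical coords
        can_c = xyz + self.deform_col(xt)
        sigma = torch.relu(self.geometry(can_g))
        rgb = torch.sigmoid(self.color(torch.cat([can_c, view_dir], dim=-1)))
        return sigma, rgb

# Usage with a batch of 1024 sampled points:
sigma, rgb = ICSDField()(torch.rand(1024, 3), torch.rand(1024, 1),
                         torch.rand(1024, 3))
```
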
Conformalized Generative Bayesian Imaging: An Uncertainty Quantification Framework for Computational Imaging
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-30 | DOI: 10.1109/TCI.2025.3649389
Canberk Ekmekci;Mujdat Cetin
Uncertainty quantification plays an important role in achieving trustworthy and reliable learning-based computational imaging. Recent advances in generative modeling and Bayesian neural networks have enabled the development of uncertainty-aware image reconstruction methods. Current generative model-based methods seek to quantify the inherent (aleatoric) uncertainty on the underlying image for given measurements by learning to sample from the posterior distribution of the underlying image. On the other hand, Bayesian neural network-based approaches aim to quantify the model (epistemic) uncertainty on the parameters of a deep neural network-based reconstruction method by approximating the posterior distribution of those parameters. Unfortunately, the need for an inversion method that can jointly quantify complex aleatoric and epistemic uncertainty patterns persists. In this paper, we present a scalable framework that can quantify both aleatoric and epistemic uncertainties. The proposed framework accepts an existing generative model-based posterior sampling method as an input and introduces an epistemic uncertainty quantification capability through Bayesian neural networks with latent variables and deep ensembling. Furthermore, by leveraging the conformal prediction methodology, the proposed framework can easily be calibrated to ensure rigorous uncertainty quantification. We evaluated the proposed framework on magnetic resonance imaging, computed tomography, and image inpainting problems and showed that the epistemic and aleatoric uncertainty estimates it produces display the characteristic features of true epistemic and aleatoric uncertainties. Furthermore, our results demonstrate that using conformal prediction on top of the proposed framework enables marginal coverage guarantees consistent with frequentist principles.
Citations: 0
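
The conformal calibration step admits a compact sketch. Assuming posterior samples per pixel from any generative sampler and a held-out calibration set with ground truth, a split-conformal quantile of a pixelwise nonconformity score rescales the sample-based intervals so that pixel-marginal coverage reaches at least 1 - alpha. The shapes and the score choice below are illustrative, not the paper's exact construction:

```python
import numpy as np

def conformal_scale(samples, truth, alpha=0.1):
    """samples: (n_cal, n_draws, H, W) posterior draws; truth: (n_cal, H, W).
    Returns the conformal quantile q of a standardized residual score."""
    mean = samples.mean(axis=1)
    std = samples.std(axis=1) + 1e-8
    scores = np.abs(truth - mean) / std           # nonconformity per pixel
    n = scores.size
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample level
    return np.quantile(scores, level)

def predict_interval(samples, q):
    """samples: (n_draws, H, W) posterior draws for one test image."""
    mean = samples.mean(axis=0)
    std = samples.std(axis=0) + 1e-8
    return mean - q * std, mean + q * std         # calibrated pixelwise band

# Usage with synthetic shapes: 20 calibration images, 32 draws each.
rng = np.random.default_rng(0)
cal_samples = rng.normal(size=(20, 32, 16, 16))
cal_truth = rng.normal(size=(20, 16, 16))
q = conformal_scale(cal_samples, cal_truth, alpha=0.1)
lo, hi = predict_interval(rng.normal(size=(32, 16, 16)), q)
```
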
Deep Learning-Based Inpainting for Sparse Arrays in Ultrafast Ultrasound Imaging
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-25 | DOI: 10.1109/TCI.2025.3648531
Roser Viñals;Jean-Philippe Thiran
Sparse arrays offer a promising solution for reducing the data volume required to reconstruct ultrasound images, making them well-suited for portable and wireless devices. However, the quality of images beamformed from a limited number of transducer elements is significantly degraded. This study proposes a deep learning-based inpainting technique to estimate complete radio frequency (RF) signals (before beamforming) from downsampled RF signals obtained using a subset of transducer elements. This approach enhances quality without the need to beamform images, offering flexibility that is particularly beneficial for applications such as speed-of-sound estimation algorithms. We introduce a model-based loss function that combines the signal and image domains by incorporating a measurement model associated with image reconstruction, and we compare it with a loss that accounts solely for the RF signal domain. Additionally, we train our network exclusively in the RF image domain, mapping images beamformed from downsampled RF signals to those beamformed from complete signals. We compare these approaches qualitatively and quantitatively; all of them enhance image quality, and the proposed method with the model-based loss achieves superior detail and quality metrics. Although trained on downsampled RF signals simulating sparse arrays in reception, all methods, especially our inpainting approach with the model-based loss, demonstrate strong adaptability to ultrafast acquisitions with reduced transducer elements in both transmission and reception. This highlights their potential for reducing the number of transducer elements in ultrasound probes. Furthermore, the proposed method generalizes well when evaluated on a different probe than the one used to acquire the training dataset.
Citations: 0
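
The model-based loss can be sketched as follows, with a generic matrix B standing in for a differentiable delay-and-sum beamforming operator (in practice B would encode the array geometry and delays; all names and shapes here are toy values, not the paper's implementation):

```python
import torch

def model_based_loss(rf_pred, rf_true, B, beta=1.0):
    """Combine a signal-domain and an image-domain data-fidelity term.
    rf_*: (batch, elements * samples) flattened RF; B: (pixels, elements * samples)."""
    signal_term = torch.mean((rf_pred - rf_true) ** 2)
    img_pred = rf_pred @ B.T                 # beamform predicted RF
    img_true = rf_true @ B.T                 # beamform reference RF
    image_term = torch.mean((img_pred - img_true) ** 2)
    return signal_term + beta * image_term   # beta weighs the image term

# Usage with toy shapes: 4 frames, 2048 RF values, 256 image pixels.
rf_pred = torch.randn(4, 2048, requires_grad=True)  # network output
rf_true = torch.randn(4, 2048)
B = torch.randn(256, 2048) / 2048 ** 0.5            # stand-in beamformer
loss = model_based_loss(rf_pred, rf_true, B)
loss.backward()                                      # gradients flow through B
```
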
Fast Sinogram-Based System Calibration for Field Free Line Magnetic Particle Imaging
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-23 | DOI: 10.1109/TCI.2025.3647229
Serhat Ilbey;Justin Ackers;Heinrich Lehr;Matthias Graeser;Jochen Franke
In this study, we propose a novel calibration method for magnetic particle imaging (MPI), in which a field-based calibration procedure is employed directly in the sinogram domain for a dynamic 3D field free line (FFL) sinusoidal trajectory. Unlike the conventional voxel-wise calibration in the image domain, our approach performs calibration measurements in the sinogram domain with alternating magnetic field offsets. The resulting sinogram-based calibration data are then used to synthesize the 3D system matrix (SM) in the image domain by exploiting the shift-invariance property of MPI systems. With the proposed method, frequency-mixing terms originating from the simultaneous translation and rotation of the FFL can be acquired without an external calibration device such as a precise 3D-axis robot. For a 3D FFL trajectory of 100 ms duration, the proposed sinogram-based calibration method acquired a 57 × 57 × 11 SM in approximately 1 minute with minimal image degradation; the conventional system calibration method requires about 1 h for the same spatial resolution.
Citations: 0
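
The shift-invariance argument behind the SM synthesis can be shown in a 1D toy: if the response to a point sample is the same for every voxel up to a spatial shift, one measured response is enough to build every system-matrix column, replacing the robot-driven voxel-by-voxel sweep. The sketch below is a stand-in for the paper's 3D FFL sinogram-domain procedure, not a reproduction of it:

```python
import numpy as np

n_vox = 33
rng = np.random.default_rng(0)
# Response of a point sample at the FOV center, measured once.
point_response = np.convolve(rng.standard_normal(n_vox),
                             np.hanning(7), mode="same")

# Build each system-matrix column as a shifted copy of that response.
mid = n_vox // 2
system_matrix = np.stack(
    [np.roll(point_response, v - mid) for v in range(n_vox)], axis=1)

# Each column now models a point sample at voxel v, so a phantom c
# reconstructs from data y = system_matrix @ c by least squares.
c_true = np.zeros(n_vox)
c_true[10], c_true[20] = 1.0, 0.5
y = system_matrix @ c_true
c_hat = np.linalg.lstsq(system_matrix, y, rcond=None)[0]
```
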
Dual-Probe Pulse-Echo Speed-of-Sound Imaging
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-19 | DOI: 10.1109/TCI.2025.3646325
Michael Jaeger;Vera H.J. van Hal;Richard Lopata;Hans-Martin Schwab
Knowledge of the spatial distribution of speed of sound (SoS) will greatly improve the diagnostic power of abdominal ultrasound (US) imaging. It can be used to account for wavefront aberrations and to improve B-mode image quality, but SoS is also an important disease marker in itself, as it can reflect disease-related changes in tissue composition. However, the relatively small aperture size compared to the desired image depth in abdominal US imaging inherently limits the axial resolution and estimation accuracy of SoS mapping. Fundamentally, a larger physical aperture would produce images with improved contrast and resolution. In this study, we assess the impact of combining two probes into a larger aperture on the estimation accuracy of SoS. Two modes of dual-probe operation are investigated, monostatic (where only one probe transmits and receives at a time) and bistatic (where both probes receive on each transmit), alongside classical single-probe operation. The quality of SoS maps is compared in digital and physical phantoms mimicking abdominal imaging. The primary result is that the axial resolution of bistatic operation clearly outperforms that of monostatic dual-probe operation, both in simulations (by 74% and 96%) and in the experiment (by 103%), indicating that the larger angle differences provided by trans-aperture data (where one probe receives and the other transmits) are essential for the superior axial resolution. Furthermore, bistatic dual-probe operation produced SoS maps with improved background uniformity and performed best at recovering layer contrast, which was substantially more underestimated in monostatic dual-probe and classical single-probe operation. This study emphasizes the importance of larger effective apertures for future ultrasound device designs.
Citations: 0
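
Why trans-aperture data helps can be illustrated with straight-ray travel times through a slowness map: rays between two probes facing different directions cross the medium at angles a single aperture never sees, which is what constrains SoS along depth. The sketch below uses a toy geometry and a first-order straight-ray integral, not the paper's reconstruction:

```python
import numpy as np

def travel_time(slowness, p0, p1, n_samples=200):
    """Integrate slowness [s/m] along the straight ray from p0 to p1 [m],
    sampling on a 1 mm grid."""
    ny, nx = slowness.shape
    pts = p0 + np.linspace(0, 1, n_samples)[:, None] * (p1 - p0)
    ix = np.clip((pts[:, 0] / 0.001).astype(int), 0, nx - 1)
    iy = np.clip((pts[:, 1] / 0.001).astype(int), 0, ny - 1)
    seg = np.linalg.norm(p1 - p0) / n_samples    # segment length [m]
    return slowness[iy, ix].sum() * seg

# 40 x 40 mm medium at 1540 m/s with a slower inclusion.
slowness = np.full((40, 40), 1 / 1540.0)
slowness[15:25, 15:25] = 1 / 1450.0

# Probe A along the top edge, probe B along the left edge.
elems_a = [np.array([x * 1e-3, 0.0]) for x in range(0, 40, 8)]
elems_b = [np.array([0.0, y * 1e-3]) for y in range(0, 40, 8)]

# Trans-aperture travel times: probe A transmits, probe B receives.
times = np.array([[travel_time(slowness, a, b) for b in elems_b]
                  for a in elems_a])
```
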
IEEE Transactions on Computational Imaging Publication Information
IF 4.8 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-17 | DOI: 10.1109/TCI.2025.3643713
{"title":"IEEE Transactions on Computational Imaging Publication Information","authors":"","doi":"10.1109/TCI.2025.3643713","DOIUrl":"https://doi.org/10.1109/TCI.2025.3643713","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"C2-C2"},"PeriodicalIF":4.8,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11302894","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145766226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0