
Latest publications in IEEE Transactions on Medical Imaging

Boosting Your Context by Dual Similarity Checkup for In-Context Learning Medical Image Segmentation.
Pub Date : 2024-08-08 DOI: 10.1109/TMI.2024.3440311
Jun Gao, Qicheng Lao, Qingbo Kang, Paul Liu, Chenlin Du, Kang Li, Le Zhang

The recent advent of in-context learning (ICL) capabilities in large pre-trained models has yielded significant advancements in the generalization of segmentation models. By supplying domain-specific image-mask pairs, the ICL model can be effectively guided to produce optimal segmentation outcomes, eliminating the necessity for model fine-tuning or interactive prompting. However, existing ICL-based segmentation models exhibit significant limitations when applied to medical segmentation datasets with substantial diversity. To address this issue, we propose a dual similarity checkup approach to guarantee the effectiveness of selected in-context samples so that their guidance can be maximally leveraged during inference. We first employ large pre-trained vision models to extract strong semantic representations from input images and construct a feature embedding memory bank for semantic similarity checkup during inference. Having ensured similarity in the input semantic space, we then minimize the discrepancy in the mask appearance distribution between the support set and the estimated mask appearance prior through similarity-weighted sampling and augmentation. We validate our proposed dual similarity checkup approach on eight publicly available medical segmentation datasets, and extensive experimental results demonstrate that our proposed method significantly improves the performance metrics of existing ICL-based segmentation models, particularly when applied to medical image datasets characterized by substantial diversity.
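The semantic similarity checkup can be illustrated with a minimal sketch (not the authors' code; the function names and toy 2-D embeddings are invented for illustration): support-set embeddings are L2-normalized into a memory bank, and the entries most similar to a query embedding are retrieved as in-context samples.

```python
import numpy as np

def build_memory_bank(embeddings):
    """L2-normalize support-set embeddings so dot products equal cosine similarity."""
    bank = np.asarray(embeddings, dtype=float)
    return bank / np.linalg.norm(bank, axis=1, keepdims=True)

def select_in_context_samples(query_embedding, memory_bank, k=2):
    """Return indices of the k most semantically similar support samples."""
    q = np.asarray(query_embedding, dtype=float)
    q = q / np.linalg.norm(q)
    sims = memory_bank @ q  # cosine similarities against every bank entry
    return np.argsort(sims)[::-1][:k], sims

# Toy example: 4 support embeddings, retrieve the 2 closest to the query.
bank = build_memory_bank([[1, 0], [0, 1], [0.9, 0.1], [-1, 0]])
top_k, sims = select_in_context_samples([1, 0.05], bank, k=2)
```

In the paper the embeddings would come from a large pre-trained vision model; here the retrieval step alone is shown.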

Citations: 0
Diffusion Modeling with Domain-conditioned Prior Guidance for Accelerated MRI and qMRI Reconstruction.
Pub Date : 2024-08-08 DOI: 10.1109/TMI.2024.3440227
Wanyu Bian, Albert Jang, Liping Zhang, Xiaonan Yang, Zachary Stewart, Fang Liu

This study introduces a novel image reconstruction technique based on a diffusion model that is conditioned on the native data domain. Our method is applied to multi-coil MRI and quantitative MRI (qMRI) reconstruction, leveraging the domain-conditioned diffusion model within the frequency and parameter domains. Prior MRI physics is used as embeddings in the diffusion model, enforcing data consistency to guide the training and sampling process, characterizing MRI k-space encoding in MRI reconstruction, and leveraging MR signal modeling for qMRI reconstruction. Furthermore, gradient descent optimization is incorporated into the diffusion steps, enhancing feature learning and improving denoising. The proposed method demonstrates significant promise, particularly for reconstructing images at high acceleration factors. Notably, it maintains high reconstruction accuracy for static and quantitative MRI reconstruction across diverse anatomical structures. Beyond its immediate applications, this method offers potential generalization capability, making it adaptable to inverse problems across various domains.
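The data-consistency guidance inside the diffusion steps can be sketched for a simplified single-coil Cartesian model (an illustrative reduction, not the paper's multi-coil qMRI pipeline; the gradient-descent update below stands in for the guidance term):

```python
import numpy as np

def data_consistency_grad(x, mask, y):
    """Gradient of 0.5 * ||M F(x) - y||^2 for a single-coil Cartesian model,
    where F is the 2D FFT, M the k-space sampling mask, and y the measured k-space."""
    residual = mask * np.fft.fft2(x) - y
    return np.real(np.fft.ifft2(mask * residual))

def guided_step(denoised, mask, y, step_size=1.0):
    """One simplified guided reverse step: take the denoiser output, then
    pull it toward agreement with the measured k-space data."""
    return denoised - step_size * data_consistency_grad(denoised, mask, y)
```

With a fully sampled mask and a unit step, a single guided step recovers the ground-truth image exactly, which is a quick sanity check on the gradient.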

Citations: 0
Metal Artifacts Reducing Method Based on Diffusion Model Using Intraoral Optical Scanning Data for Dental Cone-beam CT.
Pub Date : 2024-08-07 DOI: 10.1109/TMI.2024.3440009
Yuyang Wang, Xiaomo Liu, Liang Li

In dental cone-beam computed tomography (CBCT), metal implants can cause metal artifacts, affecting image quality and the final medical diagnosis. To reduce the impact of metal artifacts, our proposed metal artifact reduction (MAR) method takes a novel approach: it integrates CBCT data with intraoral optical scanning data, using information from these two different modalities to correct metal artifacts in the projection domain with a guided-diffusion model. The intraoral optical scanning data provide a more accurate generation domain for the diffusion model. Considering the physical mechanism of CBCT, we propose a multi-channel generation method for the training and generation stages of the diffusion model to ensure consistent generation. In this paper, we present experimental results that convincingly demonstrate the feasibility and efficacy of our approach, which for the first time introduces intraoral optical scanning data into the analysis and processing of projection-domain data with a diffusion model, and modifies the diffusion model to better fit the physical model of CBCT.
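A common way to keep a projection-domain generative model consistent with the measured data is an inpainting-style update: only detector readings inside the metal trace are synthesized, and everything else is re-imposed from the measurement at each step. A minimal sketch under that assumption (the trace mask and the trivial denoiser are invented; the paper's guidance additionally uses the optical-scan-derived prior):

```python
import numpy as np

def guided_denoise_step(sample, measured, metal_trace, denoise_fn):
    """One guidance step: denoise the current sample, then restore the
    measured projection values everywhere outside the boolean metal trace."""
    sample = denoise_fn(sample)
    return np.where(metal_trace, sample, measured)
```

Iterating this keeps the reconstruction anchored to uncorrupted data while the model fills in only the corrupted readings.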

Citations: 0
Self-Supervised Cyclic Diffeomorphic Mapping for Soft Tissue Deformation Recovery in Robotic Surgery Scenes.
Pub Date : 2024-08-07 DOI: 10.1109/TMI.2024.3439701
Shizhan Gong, Yonghao Long, Kai Chen, Jiaqi Liu, Yuliang Xiao, Alexis Cheng, Zerui Wang, Qi Dou

The ability to recover tissue deformation from visual features is fundamental for many robotic surgery applications. This has been a long-standing research topic in computer vision; however, it remains unsolved due to the complex dynamics of soft tissues under manipulation by surgical instruments. The ambiguous pixel correspondence caused by homogeneous texture makes achieving dense and accurate tissue tracking even more challenging. In this paper, we propose a novel self-supervised framework to recover tissue deformations from stereo surgical videos. Our approach integrates semantics, cross-frame motion flow, and long-range temporal dependencies to enable the recovered deformations to represent actual tissue dynamics. Moreover, we incorporate diffeomorphic mapping to regularize the warping field to be physically realistic. To comprehensively evaluate our method, we collected stereo surgical video clips containing three types of tissue manipulation (i.e., pushing, dissection, and retraction) from two different types of surgeries (i.e., hemicolectomy and mesorectal excision). Our method achieves impressive results in capturing deformation as a 3D mesh and generalizes well across manipulations and surgeries. It also outperforms current state-of-the-art methods on non-rigid registration and optical flow estimation. To the best of our knowledge, this is the first work on self-supervised learning for dense tissue deformation modeling from stereo surgical videos. Our code will be released.
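Diffeomorphic regularization of a warping field is commonly obtained by integrating a stationary velocity field with scaling and squaring. A 1-D sketch under that assumption (an illustrative textbook construction, not the authors' implementation):

```python
import numpy as np

def integrate_velocity(velocity, steps=6):
    """Scaling and squaring: integrate a stationary 1-D velocity field into a
    displacement whose map x -> x + u(x) is (approximately) diffeomorphic."""
    n = len(velocity)
    grid = np.arange(n, dtype=float)
    disp = np.asarray(velocity, dtype=float) / (2 ** steps)  # small initial step
    for _ in range(steps):
        # Compose the map with itself: u(x) <- u(x) + u(x + u(x)),
        # using linear interpolation to sample u at the warped positions.
        disp = disp + np.interp(grid + disp, grid, disp)
    return disp
```

For a constant unit velocity the integrated displacement converges to 1.0 everywhere, and the warped grid stays strictly monotone, i.e. the map remains invertible.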

Citations: 0
Enhancing Row-column array (RCA)-based 3D ultrasound vascular imaging with spatial-temporal similarity weighting.
Pub Date : 2024-08-06 DOI: 10.1109/TMI.2024.3439615
Jingke Zhang, Chengwu Huang, U-Wai Lok, Zhijie Dong, Hui Liu, Ping Gong, Pengfei Song, Shigao Chen

Ultrasound vascular imaging (UVI) is a valuable tool for monitoring physiological states and evaluating pathological diseases. Advancing from conventional two-dimensional (2D) to three-dimensional (3D) UVI would enhance vasculature visualization, thereby improving its reliability. The row-column array (RCA) has emerged as a promising approach for cost-effective ultrafast 3D imaging with a low channel count. However, ultrafast RCA imaging is often hampered by high-level sidelobe artifacts and a low signal-to-noise ratio (SNR), which makes RCA-based UVI challenging. In this study, we propose a spatial-temporal similarity weighting (St-SW) method to overcome these challenges by exploiting the incoherence of sidelobe artifacts and noise between datasets acquired using orthogonal transmissions. Simulation, in vitro blood flow phantom, and in vivo experiments were conducted to compare the proposed method with existing orthogonal plane wave imaging (OPW), row-column-specific frame-multiply-and-sum beamforming (RC-FMAS), and XDoppler techniques. Qualitative and quantitative results demonstrate the superior performance of the proposed method. In simulations, the proposed method reduced the sidelobe level by 31.3 dB, 20.8 dB, and 14.0 dB compared to OPW, XDoppler, and RC-FMAS, respectively. In the blood flow phantom experiment, the proposed method significantly improved the contrast-to-noise ratio (CNR) of the tube by 26.8 dB, 25.5 dB, and 19.7 dB compared to the OPW, XDoppler, and RC-FMAS methods, respectively. In the human submandibular gland experiment, it not only reconstructed a more complete vasculature but also improved the CNR by more than 15 dB compared to the OPW, XDoppler, and RC-FMAS methods. In summary, the proposed method effectively suppresses sidelobe artifacts and noise in images collected using an RCA under low-SNR conditions, leading to improved visualization of 3D vasculatures.
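The core idea, that coherent tissue signal correlates across the two orthogonal-transmit datasets while sidelobe artifacts and noise do not, can be sketched as a per-pixel normalized temporal correlation used as a compounding weight (an illustrative simplification of St-SW, not the published estimator; array shapes are invented):

```python
import numpy as np

def similarity_weight(seq_a, seq_b, eps=1e-12):
    """Per-pixel normalized correlation of two temporal sequences (shape (T, P))
    acquired with orthogonal transmits: coherent signal -> weight near 1,
    incoherent sidelobes/noise -> weight near 0."""
    a = seq_a - seq_a.mean(axis=0)
    b = seq_b - seq_b.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a * a).sum(axis=0) * (b * b).sum(axis=0)) + eps
    return np.clip(num / den, 0.0, 1.0)

def weighted_compound(seq_a, seq_b):
    """Compound the two datasets and attenuate pixels with low temporal similarity."""
    return 0.5 * (seq_a + seq_b) * similarity_weight(seq_a, seq_b)
```

A pixel carrying the same oscillation in both datasets keeps its amplitude, while a pixel whose two signals are uncorrelated is strongly attenuated.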
Citations: 0
2V-CBCT: Two-Orthogonal-Projection based CBCT Reconstruction and Dose Calculation for Radiation Therapy using Real Projection Data.
Pub Date : 2024-08-06 DOI: 10.1109/TMI.2024.3439573
Yikun Zhang, Dianlin Hu, Wangyao Li, Weijie Zhang, Gaoyu Chen, Ronald C Chen, Yang Chen, Hao Gao

This work demonstrates the feasibility of two-orthogonal-projection-based CBCT (2V-CBCT) reconstruction and dose calculation for radiation therapy (RT) using real projection data; to the best of our knowledge, it is the first 2V-CBCT feasibility study with real projection data. RT treatments are often delivered in multiple fractions, for which on-board CBCT is desirable for calculating the delivered dose per fraction for RT delivery quality assurance and adaptive RT. However, not all RT treatments/fractions have CBCT acquired, whereas two orthogonal projections are always available. The question addressed in this work is the feasibility of 2V-CBCT for RT dose calculation. 2V-CBCT is a severely ill-posed inverse problem, for which we propose a coarse-to-fine learning strategy. First, a 3D deep neural network that can extract and exploit inter-slice and intra-slice information is adopted to predict the initial 3D volumes. Then, a 2D deep neural network is utilized to fine-tune the initial 3D volumes slice by slice. During the fine-tuning stage, a perceptual loss based on multi-frequency features is employed to enhance the image reconstruction. Dose calculation results from both photon and proton RT demonstrate that 2V-CBCT provides accuracy comparable to full-view CBCT based on real projection data.
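One simple way to penalize errors per frequency band, in the spirit of a multi-frequency loss, is to split the FFT spectrum into concentric bands and sum per-band reconstruction errors. This is an illustrative stand-in only: the paper's perceptual loss operates on learned multi-frequency features, and the band partition here is invented.

```python
import numpy as np

def multifrequency_loss(pred, target, n_bands=3):
    """Sum of per-band MSEs over concentric FFT frequency bands of two 2D images."""
    fp = np.fft.fftshift(np.fft.fft2(pred))
    ft = np.fft.fftshift(np.fft.fft2(target))
    h, w = pred.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)  # distance from the DC bin
    r_max = radius.max()
    loss = 0.0
    for b in range(n_bands):
        band = (radius >= b * r_max / n_bands) & (radius < (b + 1) * r_max / n_bands + 1e-9)
        band_p = np.real(np.fft.ifft2(np.fft.ifftshift(fp * band)))
        band_t = np.real(np.fft.ifft2(np.fft.ifftshift(ft * band)))
        loss += np.mean((band_p - band_t) ** 2)
    return loss
```

Identical images give zero loss in every band; a constant offset shows up only through the lowest (DC-containing) band.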

Citations: 0
A Novel Poroelastography Method for High-quality Estimation of Lateral Strain, Solid Stress and Fluid Pressure In Vivo.
Pub Date : 2024-08-05 DOI: 10.1109/TMI.2024.3438564
Md Hadiur Rahman Khan, Raffaella Righetti

Assessment of mechanical and transport properties of tissues using ultrasound elasticity imaging requires accurate estimation of the spatiotemporal distribution of volumetric strain. Due to physical constraints such as pitch limitation and the lack of phase information in the lateral direction, the quality of lateral strain estimation is typically significantly lower than that of axial strain estimation. In this paper, a novel lateral strain estimation technique based on the physics of compressible porous media is developed, tested, and validated. This technique is referred to as "Poroelastography-based Ultrasound Lateral Strain Estimation" (PULSE). PULSE differs from previously proposed lateral strain estimators in that it uses the underlying physics of internal fluid flow within a local region of the tissue as its theoretical foundation. PULSE establishes a relation between spatiotemporal changes in the axial strains and corresponding spatiotemporal changes in the lateral strains, effectively allowing assessment of lateral strains with quality comparable to that of axial strain estimators. We demonstrate that PULSE can also be used to accurately track compression-induced solid stresses and fluid pressure in cancers using ultrasound poroelastography (USPE). In this study, we report the theoretical formulation of PULSE and its validation using finite element (FE) and ultrasound simulations. PULSE-generated results exhibit less than 5% percentage relative error (PRE) and greater than 90% structural similarity index (SSIM) compared to ground-truth simulations. Experimental results are included to qualitatively assess the performance of PULSE in vivo. The proposed method can be used to overcome the inherent limitations of non-axial strain imaging and improve the clinical translatability of USPE.
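For intuition only: in a linear-elastic solid with effective Poisson's ratio ν_eff, lateral strain relates to axial strain as ε_lat = −ν_eff · ε_ax, and volumetric strain is the trace of the strain tensor; in a poroelastic tissue, ν_eff decays over time as pore fluid drains, which is the kind of spatiotemporal coupling PULSE exploits. The sketch below is this textbook relation, not the PULSE estimator itself.

```python
import numpy as np

def lateral_from_axial(axial_strain, nu_eff):
    """Textbook linear-elastic relation: epsilon_lat = -nu_eff * epsilon_ax,
    applied per pixel (and per frame, if nu_eff varies over time)."""
    return -nu_eff * np.asarray(axial_strain, dtype=float)

def volumetric_strain(axial, lateral, elevational=None):
    """Trace of the strain tensor; assume elevational == lateral when unmeasured."""
    if elevational is None:
        elevational = lateral
    return axial + lateral + elevational
```

In the incompressible limit (ν_eff = 0.5) the volumetric strain vanishes; as fluid drains and ν_eff drops, a nonzero volumetric strain appears.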

Citations: 0
SegMorph: Concurrent Motion Estimation and Segmentation for Cardiac MRI Sequences.
Pub Date : 2024-08-05 DOI: 10.1109/TMI.2024.3435000
Ning Bi, Arezoo Zakeri, Yan Xia, Nina Cheng, Zeike A Taylor, Alejandro F Frangi, Ali Gooya

We propose a novel recurrent variational network, SegMorph, to perform concurrent segmentation and motion estimation on cardiac cine magnetic resonance image (CMR) sequences. Our model establishes a recurrent latent space that captures spatiotemporal features from cine-MRI sequences for multitask inference and synthesis. The proposed model follows a recurrent variational auto-encoder framework and adopts a learnt prior from the temporal inputs. We utilise a multi-branch decoder to handle bi-ventricular segmentation and motion estimation simultaneously. In addition to the spatiotemporal features from the latent space, motion estimation enriches the supervision of sequential segmentation tasks by providing pseudo-ground truth. On the other hand, the segmentation branch helps with motion estimation by predicting deformation vector fields (DVFs) based on anatomical information. Experimental results demonstrate that the proposed method performs better than state-of-the-art approaches, both qualitatively and quantitatively, on segmentation and motion estimation tasks. We achieved an 81% average Dice Similarity Coefficient (DSC) and an average Hausdorff distance of less than 3.5 mm on segmentation. Meanwhile, we achieved a motion estimation Dice Similarity Coefficient of over 79%, with approximately 0.14% of pixels displaying a negative Jacobian determinant in the estimated DVFs.

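The last figure reported above — the fraction of pixels whose estimated DVF has a negative Jacobian determinant — measures local folding of the deformation. A minimal NumPy sketch of how that fraction can be computed for a 2D displacement field, using finite-difference gradients (the paper's exact discretisation is not specified):

```python
import numpy as np

def neg_jacobian_fraction(dvf):
    """Fraction of pixels whose deformation Jacobian determinant is
    negative (i.e. the mapping locally folds). `dvf` has shape
    (2, H, W): displacement components in y and x. The warped
    position is phi(p) = p + u(p)."""
    uy, ux = dvf
    # Partial derivatives of the displacement via finite differences.
    duy_dy, duy_dx = np.gradient(uy)
    dux_dy, dux_dx = np.gradient(ux)
    # Jacobian of phi = I + grad(u); determinant of the 2x2 matrix
    # at every pixel.
    det = (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy
    return (det < 0).mean()
```

A zero displacement field gives determinant 1 everywhere (fraction 0), while a field that reverses orientation, e.g. ux = -2x, gives a negative determinant everywhere.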
{"title":"SegMorph: Concurrent Motion Estimation and Segmentation for Cardiac MRI Sequences.","authors":"Ning Bi, Arezoo Zakeri, Yan Xia, Nina Cheng, Zeike A Taylor, Alejandro F Frangi, Ali Gooya","doi":"10.1109/TMI.2024.3435000","DOIUrl":"https://doi.org/10.1109/TMI.2024.3435000","url":null,"abstract":"<p><p>We propose a novel recurrent variational network, SegMorph, to perform concurrent segmentation and motion estimation on cardiac cine magnetic resonance image (CMR) sequences. Our model establishes a recurrent latent space that captures spatiotemporal features from cine-MRI sequences for multitask inference and synthesis. The proposed model follows a recurrent variational auto-encoder framework and adopts a learnt prior from the temporal inputs. We utilise a multi-branch decoder to handle bi-ventricular segmentation and motion estimation simultaneously. In addition to the spatiotemporal features from the latent space, motion estimation enriches the supervision of sequential segmentation tasks by providing pseudo-ground truth. On the other hand, the segmentation branch helps with motion estimation by predicting deformation vector fields (DVFs) based on anatomical information. Experimental results demonstrate that the proposed method performs better than state-of-the-art approaches qualitatively and quantitatively for both segmentation and motion estimation tasks. We achieved an 81% average Dice Similarity Coefficient (DSC) and a less than 3.5 mm average Hausdorff distance on segmentation. 
Meanwhile, we achieved a motion estimation Dice Similarity Coefficient of over 79%, with approximately 0.14% of pixels displaying a negative Jacobian determinant in the estimated DVFs.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141895072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
OTMorph: Unsupervised Multi-domain Abdominal Medical Image Registration Using Neural Optimal Transport.
Pub Date : 2024-08-02 DOI: 10.1109/TMI.2024.3437295
Boah Kim, Yan Zhuang, Tejas Sudharshan Mathai, Ronald M Summers

Deformable image registration is one of the essential processes in analyzing medical images. In particular, when diagnosing abdominal diseases such as hepatic cancer and lymphoma, multi-domain images scanned with different modalities or different imaging protocols are often used. However, they are not aligned, due to differences in scanning times, patient breathing, movement, etc. Although recent learning-based approaches can provide deformations in real time with high performance, multi-domain abdominal image registration using deep learning is still challenging, since images in different domains have different characteristics such as image contrast and intensity ranges. To address this, this paper proposes a novel unsupervised multi-domain image registration framework using neural optimal transport, dubbed OTMorph. Given moving and fixed volumes as input, a transport module of the proposed model learns the optimal transport plan that maps the data distribution of the moving volume to that of the fixed volume and estimates a domain-transported volume. Subsequently, a registration module that takes the transported volume as input can effectively estimate the deformation field, improving deformation performance. Experimental results on multi-domain registration of multi-modality and multi-parametric abdominal medical images demonstrate that the proposed method provides superior deformable registration via the domain-transported image, which alleviates the domain gap between the input images. We also attain improvements on out-of-distribution data, indicating the superior generalizability of our model for the registration of various medical images. Our source code is available at https://github.com/boahK/OTMorph.

{"title":"OTMorph: Unsupervised Multi-domain Abdominal Medical Image Registration Using Neural Optimal Transport.","authors":"Boah Kim, Yan Zhuang, Tejas Sudharshan Mathai, Ronald M Summers","doi":"10.1109/TMI.2024.3437295","DOIUrl":"https://doi.org/10.1109/TMI.2024.3437295","url":null,"abstract":"<p><p>Deformable image registration is one of the essential processes in analyzing medical images. In particular, when diagnosing abdominal diseases such as hepatic cancer and lymphoma, multi-domain images scanned from different modalities or different imaging protocols are often used. However, they are not aligned due to scanning times, patient breathing, movement, etc. Although recent learning-based approaches can provide deformations in real-time with high performance, multi-domain abdominal image registration using deep learning is still challenging since the images in different domains have different characteristics such as image contrast and intensity ranges. To address this, this paper proposes a novel unsupervised multi-domain image registration framework using neural optimal transport, dubbed OTMorph. When moving and fixed volumes are given as input, a transport module of our proposed model learns the optimal transport plan to map data distributions from the moving to the fixed volumes and estimates a domain-transported volume. Subsequently, a registration module taking the transported volume can effectively estimate the deformation field, leading to deformation performance improvement. Experimental results on multi-domain image registration using multi-modality and multi-parametric abdominal medical images demonstrate that the proposed method provides superior deformable registration via the domain-transported image that alleviates the domain gap between the input images. Also, we attain the improvement even on out-of-distribution data, which indicates the superior generalizability of our model for the registration of various medical images. 
Our source code is available at https://github.com/boahK/OTMorph.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141879979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Consistency-guided Differential Decoding for Enhancing Semi-supervised Medical Image Segmentation.
Pub Date : 2024-08-01 DOI: 10.1109/TMI.2024.3429340
Qingjie Zeng, Yutong Xie, Zilin Lu, Mengkang Lu, Jingfeng Zhang, Yuyin Zhou, Yong Xia

Semi-supervised learning (SSL) has been proven beneficial for mitigating the issue of limited labeled data, especially in volumetric medical image segmentation. Unlike previous SSL methods, which focus on exploring highly confident pseudo-labels or developing consistency regularization schemes, our empirical findings suggest that differential decoder features emerge naturally when two decoders strive to generate consistent predictions. Based on this observation, we first analyze the value of the discrepancy signal in learning towards consistency, under both pseudo-labeling and consistency regularization settings, and subsequently propose a novel SSL method called LeFeD, which learns the feature-level discrepancies obtained from two decoders by feeding this information as a feedback signal to the encoder. The core design of LeFeD is to enlarge the discrepancies by training differential decoders, and then learn from the differential features iteratively. We evaluate LeFeD against eight state-of-the-art (SOTA) methods on three public datasets. Experiments show that LeFeD surpasses competitors without any bells and whistles, such as uncertainty estimation and strong constraints, and sets a new state of the art for semi-supervised medical image segmentation. Code has been released at https://github.com/maxwell0027/LeFeD.

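LeFeD's core signal is the feature-level discrepancy between two decoders. A hypothetical sketch of that discrepancy and one simple way to use it — as a per-voxel confidence weight that is 1 where the decoders agree and decays where they disagree. The actual method feeds multi-scale decoder discrepancies back to the encoder; the names and the exponential weighting here are illustrative assumptions:

```python
import numpy as np

def decoder_discrepancy(feats_a, feats_b):
    """Per-location absolute discrepancy between the outputs of two
    decoders (toy analogue of LeFeD's feedback signal)."""
    return np.abs(np.asarray(feats_a, float) - np.asarray(feats_b, float))

def consistency_weight(p_a, p_b):
    """Map discrepancy to a weight in (0, 1]: exactly 1 where the two
    decoders' predictions coincide, smaller where they disagree."""
    return np.exp(-decoder_discrepancy(p_a, p_b))
```

Such a weight could, for instance, down-weight a pseudo-label loss at voxels where the two decoders conflict, which is one common way discrepancy signals are exploited in consistency-based SSL.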
{"title":"Consistency-guided Differential Decoding for Enhancing Semi-supervised Medical Image Segmentation.","authors":"Qingjie Zeng, Yutong Xie, Zilin Lu, Mengkang Lu, Jingfeng Zhang, Yuyin Zhou, Yong Xia","doi":"10.1109/TMI.2024.3429340","DOIUrl":"10.1109/TMI.2024.3429340","url":null,"abstract":"<p><p>Semi-supervised learning (SSL) has been proven beneficial for mitigating the issue of limited labeled data, especially on volumetric medical image segmentation. Unlike previous SSL methods which focus on exploring highly confident pseudo-labels or developing consistency regularization schemes, our empirical findings suggest that differential decoder features emerge naturally when two decoders strive to generate consistent predictions. Based on the observation, we first analyze the treasure of discrepancy in learning towards consistency, under both pseudo-labeling and consistency regularization settings, and subsequently propose a novel SSL method called LeFeD, which learns the feature-level discrepancies obtained from two decoders, by feeding such information as feedback signals to the encoder. The core design of LeFeD is to enlarge the discrepancies by training differential decoders, and then learn from the differential features iteratively. We evaluate LeFeD against eight state-of-the-art (SOTA) methods on three public datasets. Experiments show LeFeD surpasses competitors without any bells and whistles, such as uncertainty estimation and strong constraints, as well as setting a new state of the art for semi-supervised medical image segmentation. 
Code has been released at https://github.com/maxwell0027/LeFeD.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141876992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
IEEE transactions on medical imaging