
Latest publications in IEEE transactions on medical imaging

Bi-Constraints Diffusion: A Conditional Diffusion Model with Degradation Guidance for Metal Artifact Reduction.
Pub Date : 2024-08-15 DOI: 10.1109/TMI.2024.3442950
Mengting Luo, Nan Zhou, Tao Wang, Linchao He, Wang Wang, Hu Chen, Peixi Liao, Yi Zhang

In recent years, score-based diffusion models have emerged as effective tools for estimating score functions from empirical data distributions, particularly for integrating implicit priors into inverse problems such as CT reconstruction. However, score-based diffusion models have rarely been explored in challenging tasks such as metal artifact reduction (MAR). In this paper, we introduce the Bi-Constraints Diffusion Model for Metal Artifact Reduction (BCDMAR), an innovative approach that enhances iterative reconstruction with a conditional diffusion model for MAR. This method employs a metal artifact degradation operator in place of the traditional metal-excluded projection operator in the data-fidelity term, thereby preserving structural details around metal regions. However, score-based diffusion models tend to be susceptible to grayscale shifts and unreliable structures, making it challenging to reach an optimal solution. To address this, we utilize a pre-corrected image as a prior constraint to guide the generation of the score-based diffusion model. By applying the score-based diffusion model and the data-fidelity step in each sampling iteration, BCDMAR effectively maintains reliable tissue representation around metal regions and produces highly consistent structures in non-metal regions. Through extensive experiments on metal artifact reduction tasks, BCDMAR demonstrates superior performance over other state-of-the-art unsupervised and supervised methods, both quantitatively and in terms of visual results.
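As a rough illustration of the alternation described above between a score-based prior step and a degradation-guided data-fidelity step, the following Python sketch shows one possible sampling iteration. The names `score_model` and `degrade_op`, the pre-corrected prior blending, and the weighting constants are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of one BCDMAR-style sampling iteration: a score-based
# denoising update, a data-fidelity step through an assumed metal-artifact
# degradation operator, and a soft constraint toward a pre-corrected image.
import torch

def sampling_step(x_t, t, score_model, degrade_op, measured_sino, x_precorrected,
                  sigma_t, lambda_fid=1.0, lambda_prior=0.1):
    # 1) Score-based prior step (simplified denoising update).
    score = score_model(x_t, t)
    x = x_t + (sigma_t ** 2) * score

    # 2) Data-fidelity step: pull the estimate toward the measurements through
    #    the assumed degradation operator A(x) instead of a metal-excluded projector.
    x = x.detach().requires_grad_(True)
    residual = degrade_op(x) - measured_sino
    fid = 0.5 * (residual ** 2).sum()
    grad = torch.autograd.grad(fid, x)[0]
    x = x - lambda_fid * grad

    # 3) Prior constraint: softly anchor the sample to the pre-corrected image
    #    to suppress grayscale shifts and unreliable structures.
    x = (1 - lambda_prior) * x + lambda_prior * x_precorrected
    return x.detach()
```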

Citations: 0
Domain-interactive Contrastive Learning and Prototype-guided Self-training for Cross-domain Polyp Segmentation.
Pub Date : 2024-08-14 DOI: 10.1109/TMI.2024.3443262
Ziru Lu, Yizhe Zhang, Yi Zhou, Ye Wu, Tao Zhou

Accurate polyp segmentation from colonoscopy images plays a critical role in the diagnosis and treatment of colorectal cancer. While deep learning-based polyp segmentation models have made significant progress, they often suffer from performance degradation when applied to unseen target-domain datasets collected from different imaging devices. To address this challenge, unsupervised domain adaptation (UDA) methods have gained attention by leveraging labeled source data and unlabeled target data to reduce the domain gap. However, existing UDA methods primarily focus on capturing class-wise representations, neglecting domain-wise representations. Additionally, uncertainty in pseudo-labels could hinder segmentation performance. To tackle these issues, we propose a novel Domain-interactive Contrastive Learning and Prototype-guided Self-training (DCL-PS) framework for cross-domain polyp segmentation. Specifically, domain-interactive contrastive learning (DCL) with a domain-mixed prototype updating strategy is proposed to discriminate class-wise feature representations across domains. Then, to enhance the feature extraction ability of the encoder, we present a contrastive learning-based cross-consistency training (CL-CCT) strategy, which is imposed on both the prototypes obtained from the outputs of the main decoder and the perturbed auxiliary outputs. Furthermore, we propose a prototype-guided self-training (PS) strategy, which dynamically assigns a weight to each pixel during self-training, filtering out unreliable pixels and improving the quality of pseudo-labels. Experimental results demonstrate the superiority of DCL-PS in improving polyp segmentation performance in the target domain. The code will be released at https://github.com/taozh2017/DCLPS.
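The prototype-guided self-training idea, where each pixel receives a reliability weight before contributing to the pseudo-label loss, could look roughly like the following sketch; the cosine-similarity weighting and temperature are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch: pseudo-labels come from the nearest class prototype, and
# each pixel's loss is down-weighted when its prototype similarity is low.
import torch
import torch.nn.functional as F

def prototype_weighted_self_training_loss(features, logits, prototypes, tau=0.1):
    """features: (B, C, H, W) encoder features; logits: (B, K, H, W) predictions;
    prototypes: (K, C) class prototypes accumulated across domains."""
    B, C, H, W = features.shape
    feat = F.normalize(features, dim=1).permute(0, 2, 3, 1).reshape(-1, C)    # (BHW, C)
    proto = F.normalize(prototypes, dim=1)                                    # (K, C)
    sim = feat @ proto.t() / tau                                              # (BHW, K)

    pseudo = sim.argmax(dim=1)                     # pseudo-label from nearest prototype
    weight = sim.softmax(dim=1).max(dim=1).values  # per-pixel reliability weight

    logits_flat = logits.permute(0, 2, 3, 1).reshape(-1, logits.shape[1])
    ce = F.cross_entropy(logits_flat, pseudo, reduction="none")
    return (weight * ce).mean()
```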

Citations: 0
Prompt-driven Latent Domain Generalization for Medical Image Classification.
Pub Date : 2024-08-13 DOI: 10.1109/TMI.2024.3443119
Siyuan Yan, Zhen Yu, Chi Liu, Lie Ju, Dwarikanath Mahapatra, Brigid Betz-Stablein, Victoria Mar, Monika Janda, Peter Soyer, Zongyuan Ge

Deep learning models for medical image analysis easily suffer from distribution shifts caused by dataset artifact bias, camera variations, differences in imaging stations, etc., leading to unreliable diagnoses in real-world clinical settings. Domain generalization (DG) methods, which aim to train models on multiple domains so that they perform well on unseen domains, offer a promising direction to solve the problem. However, existing DG methods assume that domain labels of each image are available and accurate, which is typically feasible for only a limited number of medical datasets. To address these challenges, we propose a unified DG framework for medical image classification without relying on domain labels, called Prompt-driven Latent Domain Generalization (PLDG). PLDG consists of unsupervised domain discovery and prompt learning. The framework first discovers pseudo domain labels by clustering the bias-associated style features, then leverages collaborative domain prompts to guide a Vision Transformer to learn knowledge from the discovered diverse domains. To facilitate cross-domain knowledge learning between different prompts, we introduce a domain prompt generator that enables knowledge sharing between domain prompts and a shared prompt. A domain mixup strategy is additionally employed to allow more flexible decision margins and mitigate the risk of incorrect domain assignments. Extensive experiments on three medical image classification tasks and one debiasing task demonstrate that our method achieves comparable or even superior performance to conventional DG algorithms without relying on domain labels. Our code is publicly available at https://github.com/SiyuanYan1/PLDG/tree/main.
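A minimal sketch of the unsupervised domain-discovery step is given below, assuming the "bias-associated style features" are channel-wise mean/std statistics of shallow features clustered with k-means; the feature choice and cluster count are assumptions, not details from the paper.

```python
# Hypothetical sketch: cluster simple style statistics of shallow activations
# to obtain pseudo domain labels for each training image.
import numpy as np
from sklearn.cluster import KMeans

def discover_pseudo_domains(shallow_features, n_domains=4):
    """shallow_features: (N, C, H, W) array of shallow-layer activations."""
    mu = shallow_features.mean(axis=(2, 3))        # (N, C) channel means
    sigma = shallow_features.std(axis=(2, 3))      # (N, C) channel stds
    style = np.concatenate([mu, sigma], axis=1)    # (N, 2C) style descriptor
    labels = KMeans(n_clusters=n_domains, n_init=10).fit_predict(style)
    return labels                                  # pseudo domain label per image
```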

Citations: 0
A New Benchmark: Clinical Uncertainty and Severity Aware Labeled Chest X-Ray Images with Multi-Relationship Graph Learning.
Pub Date : 2024-08-09 DOI: 10.1109/TMI.2024.3441494
Mengliang Zhang, Xinyue Hu, Lin Gu, Liangchen Liu, Kazuma Kobayashi, Tatsuya Harada, Yan Yan, Ronald M Summers, Yingying Zhu

Chest radiography, commonly known as CXR, is frequently utilized in clinical settings to detect cardiopulmonary conditions. However, even seasoned radiologists might offer different evaluations regarding the seriousness and uncertainty associated with observed abnormalities. Previous research has attempted to utilize clinical notes to extract abnormality labels for training deep-learning models in CXR image diagnosis. However, these methods often neglected the varying degrees of severity and uncertainty linked to different labels. In our study, we first assembled a comprehensive new dataset of CXR images based on clinical textual data, which incorporates radiologists' assessments of uncertainty and severity. Using this dataset, we introduce a multi-relationship graph learning framework that leverages spatial and semantic relationships while addressing expert uncertainty through a dedicated loss function. Our research showcases a notable enhancement in CXR image diagnosis and in the interpretability of the diagnostic model, surpassing existing state-of-the-art methodologies. The disease severity and uncertainty dataset we extracted is available at: https://physionet.org/content/cad-chest/1.0/.
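One plausible form of a loss that folds radiologist-assessed uncertainty and severity into training, in the spirit of the dedicated loss function mentioned above, is sketched below; the specific weighting scheme is an assumption for illustration only.

```python
# Hypothetical sketch: certain labels contribute fully, uncertain ones are
# down-weighted, and severity emphasizes the positive class.
import torch
import torch.nn.functional as F

def uncertainty_severity_bce(logits, labels, uncertainty, severity):
    """logits, labels, uncertainty, severity: (B, K) float tensors;
    labels in {0, 1}; uncertainty and severity scores normalized to [0, 1]."""
    per_label = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    weight = (1.0 - uncertainty) * (1.0 + severity * labels)  # trust certain labels, emphasize severe positives
    return (weight * per_label).mean()
```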

Citations: 0
RemixFormer++: A Multi-modal Transformer Model for Precision Skin Tumor Differential Diagnosis with Memory-efficient Attention.
Pub Date : 2024-08-09 DOI: 10.1109/TMI.2024.3441012
Jing Xu, Kai Huang, Lianzhen Zhong, Yuan Gao, Kai Sun, Wei Liu, Yanjie Zhou, Wenchao Guo, Yuan Guo, Yuanqiang Zou, Yuping Duan, Le Lu, Yu Wang, Xiang Chen, Shuang Zhao

Diagnosing malignant skin tumors accurately at an early stage can be challenging due to the ambiguous and even confusing visual characteristics displayed by various categories of skin tumors. To improve diagnosis precision, all available clinical data from multiple sources, particularly clinical images, dermoscopy images, and medical history, could be considered. Aligning with clinical practice, we propose a novel Transformer model, named RemixFormer++, that consists of a clinical image branch, a dermoscopy image branch, and a metadata branch. Given the distinct characteristics inherent in clinical and dermoscopy images, specialized attention strategies are adopted for each type. Clinical images are processed through a top-down architecture, capturing both localized lesion details and global contextual information. Conversely, dermoscopy images undergo bottom-up processing with two-level hierarchical encoders, designed to pinpoint fine-grained structural and textural features. A dedicated metadata branch seamlessly integrates non-visual information by encoding relevant patient data. Fusing features from the three branches substantially boosts disease classification accuracy. RemixFormer++ demonstrates exceptional performance on four single-modality datasets (PAD-UFES-20, ISIC 2017/2018/2019). Compared with the previous best method on the public multi-modal Derm7pt dataset, we achieved an absolute 5.3% increase in averaged F1 and 1.2% in accuracy for the classification of five skin tumors. Furthermore, using a large-scale in-house dataset of 10,351 patients with the twelve most common skin tumors, our method obtained an overall classification accuracy of 92.6%. These promising results, on par with or better than the performance of 191 dermatologists in a comprehensive reader study, clearly indicate the potential clinical usability of our method.
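As a hedged sketch of how features from the three branches might be fused for classification, a simple concatenation-plus-MLP head is shown below; the dimensions and the fusion choice are assumptions, and the paper's actual fusion module may differ.

```python
# Hypothetical sketch: late fusion of clinical-image, dermoscopy-image, and
# metadata features by concatenation followed by a small classification head.
import torch
import torch.nn as nn

class ThreeBranchFusionHead(nn.Module):
    def __init__(self, d_clinical=768, d_dermoscopy=768, d_meta=64, n_classes=5):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_clinical + d_dermoscopy + d_meta, 512),
            nn.GELU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, f_clinical, f_dermoscopy, f_meta):
        fused = torch.cat([f_clinical, f_dermoscopy, f_meta], dim=-1)
        return self.classifier(fused)
```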

Citations: 0
PRECISION: A Physics-Constrained and Noise-Controlled Diffusion Model for Photon Counting Computed Tomography.
Pub Date : 2024-08-08 DOI: 10.1109/TMI.2024.3440651
Ruifeng Chen, Zhongliang Zhang, Guotao Quan, Yanfeng Du, Yang Chen, Yinsheng Li

Recently, the use of photon counting detectors in computed tomography (PCCT) has attracted extensive attention. It is highly desirable to improve the quality of material basis images and the quantitative accuracy of elemental composition, particularly when PCCT data are acquired at lower radiation dose levels. In this work, we develop a physics-constrained and noise-controlled diffusion model, PRECISION in short, to address the degraded quality of material basis images and the inaccurate quantification of elemental composition caused mainly by the imperfect noise models and/or hand-crafted regularization of material basis images, such as local smoothness and/or sparsity, leveraged in existing direct material basis image reconstruction approaches. In stark contrast, PRECISION learns distribution-level regularization to describe the features of ideal material basis images by training a noise-controlled spatial-spectral diffusion model. The optimal material basis images of each individual subject are sampled from this learned distribution under the constraint of the physical model of a given PCCT system and the measured data obtained from the subject. PRECISION exhibits the potential to improve the quality of material basis images and the quantitative accuracy of elemental composition for PCCT.
Citations: 0
Diffusion Modeling with Domain-conditioned Prior Guidance for Accelerated MRI and qMRI Reconstruction.
Pub Date : 2024-08-08 DOI: 10.1109/TMI.2024.3440227
Wanyu Bian, Albert Jang, Liping Zhang, Xiaonan Yang, Zachary Stewart, Fang Liu

This study introduces a novel image reconstruction technique based on a diffusion model that is conditioned on the native data domain. Our method is applied to multi-coil MRI and quantitative MRI (qMRI) reconstruction, leveraging the domain-conditioned diffusion model within the frequency and parameter domains. Prior MRI physics is used as embeddings in the diffusion model, enforcing data consistency to guide the training and sampling process, characterizing MRI k-space encoding for MRI reconstruction, and leveraging MR signal modeling for qMRI reconstruction. Furthermore, a gradient descent optimization is incorporated into the diffusion steps, enhancing feature learning and improving denoising. The proposed method demonstrates significant promise, particularly for reconstructing images at high acceleration factors. Notably, it maintains high reconstruction accuracy for static and quantitative MRI reconstruction across diverse anatomical structures. Beyond its immediate applications, this method provides potential generalization capability, making it adaptable to inverse problems across various domains.

Citations: 0
Boosting Your Context by Dual Similarity Checkup for In-Context Learning Medical Image Segmentation.
Pub Date : 2024-08-08 DOI: 10.1109/TMI.2024.3440311
Jun Gao, Qicheng Lao, Qingbo Kang, Paul Liu, Chenlin Du, Kang Li, Le Zhang

The recent advent of in-context learning (ICL) capabilities in large pre-trained models has yielded significant advancements in the generalization of segmentation models. By supplying domain-specific image-mask pairs, the ICL model can be effectively guided to produce optimal segmentation outcomes, eliminating the necessity for model fine-tuning or interactive prompting. However, existing ICL-based segmentation models exhibit significant limitations when applied to medical segmentation datasets with substantial diversity. To address this issue, we propose a dual similarity checkup approach to guarantee the effectiveness of selected in-context samples so that their guidance can be maximally leveraged during inference. We first employ large pre-trained vision models to extract strong semantic representations from input images and to construct a feature embedding memory bank for semantic similarity checkup during inference. Having ensured similarity in the input semantic space, we then minimize the discrepancy between the mask appearance distribution of the support set and the estimated mask appearance prior through similarity-weighted sampling and augmentation. We validate our proposed dual similarity checkup approach on eight publicly available medical segmentation datasets, and extensive experimental results demonstrate that our proposed method significantly improves the performance metrics of existing ICL-based segmentation models, particularly when applied to medical image datasets characterized by substantial diversity.
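The semantic similarity checkup against a feature embedding memory bank can be pictured as a nearest-neighbor retrieval of support samples, as in the sketch below; the embedding source and top-k selection are illustrative assumptions.

```python
# Hypothetical sketch: retrieve the k in-context (support) samples whose
# embeddings are most cosine-similar to the query image embedding.
import torch
import torch.nn.functional as F

def select_in_context_samples(query_embedding, memory_bank, k=4):
    """query_embedding: (D,) feature of the query image;
    memory_bank: (N, D) embeddings of candidate image-mask pairs."""
    q = F.normalize(query_embedding, dim=0)
    bank = F.normalize(memory_bank, dim=1)
    sim = bank @ q                        # (N,) cosine similarity to the query
    topk = torch.topk(sim, k=k)
    return topk.indices, topk.values      # indices of the k most similar support samples
```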

Citations: 0
Metal Artifacts Reducing Method Based on Diffusion Model Using Intraoral Optical Scanning Data for Dental Cone-beam CT.
Pub Date : 2024-08-07 DOI: 10.1109/TMI.2024.3440009
Yuyang Wang, Xiaomo Liu, Liang Li

In dental cone-beam computed tomography (CBCT), metal implants can cause metal artifacts, affecting image quality and the final medical diagnosis. To reduce the impact of metal artifacts, our proposed metal artifact reduction (MAR) method takes a novel approach by integrating CBCT data with intraoral optical scanning data, utilizing information from these two different modalities to correct metal artifacts in the projection domain using a guided-diffusion model. The intraoral optical scanning data provide a more accurate generation domain for the diffusion model. We propose a multi-channel generation method for the training and generation stages of the diffusion model, considering the physical mechanism of CBCT, to ensure the consistency of the diffusion model's generation. In this paper, we present experimental results that convincingly demonstrate the feasibility and efficacy of our approach, which introduces intraoral optical scanning data into the analysis and processing of projection-domain data with a diffusion model for the first time, and which modifies the diffusion model to better adapt to the physical model of CBCT.

Citations: 0
Self-Supervised Cyclic Diffeomorphic Mapping for Soft Tissue Deformation Recovery in Robotic Surgery Scenes.
Pub Date : 2024-08-07 DOI: 10.1109/TMI.2024.3439701
Shizhan Gong, Yonghao Long, Kai Chen, Jiaqi Liu, Yuliang Xiao, Alexis Cheng, Zerui Wang, Qi Dou

The ability to recover tissue deformation from visual features is fundamental for many robotic surgery applications. This has been a long-standing research topic in computer vision; however, it remains unsolved due to the complex dynamics of soft tissues when they are manipulated by surgical instruments. The ambiguous pixel correspondence caused by homogeneous texture makes achieving dense and accurate tissue tracking even more challenging. In this paper, we propose a novel self-supervised framework to recover tissue deformations from stereo surgical videos. Our approach integrates semantics, cross-frame motion flow, and long-range temporal dependencies to enable the recovered deformations to represent actual tissue dynamics. Moreover, we incorporate diffeomorphic mapping to regularize the warping field to be physically realistic. To comprehensively evaluate our method, we collected stereo surgical video clips containing three types of tissue manipulation (i.e., pushing, dissection, and retraction) from two different types of surgeries (i.e., hemicolectomy and mesorectal excision). Our method achieves impressive results in capturing deformation of the 3D mesh and generalizes well across manipulations and surgeries. It also outperforms current state-of-the-art methods on non-rigid registration and optical flow estimation. To the best of our knowledge, this is the first work on self-supervised learning for dense tissue deformation modeling from stereo surgical videos. Our code will be released.
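One common way to keep a predicted warping field diffeomorphic is to integrate a stationary velocity field by scaling and squaring, sketched below in PyTorch; the paper's exact parameterization is not given here, so this is an assumption-based illustration.

```python
# Hypothetical sketch: scaling-and-squaring integration of a stationary velocity
# field into an approximately diffeomorphic displacement field.
import torch
import torch.nn.functional as F

def integrate_velocity(velocity, steps=6):
    """velocity: (B, 2, H, W) stationary velocity field in pixels (x, y along dim 1)."""
    B, _, H, W = velocity.shape
    disp = velocity / (2 ** steps)  # start from a small displacement
    # identity sampling grid in normalized [-1, 1] coordinates, as expected by grid_sample
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=velocity.device),
        torch.linspace(-1, 1, W, device=velocity.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)

    for _ in range(steps):  # compose the warp with itself: u <- u(x) + u(x + u(x))
        norm_disp = torch.stack(
            [2 * disp[:, 0] / max(W - 1, 1), 2 * disp[:, 1] / max(H - 1, 1)], dim=-1
        )
        warped = F.grid_sample(disp, grid + norm_disp,
                               align_corners=True, padding_mode="border")
        disp = disp + warped
    return disp  # displacement field in pixels
```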

Citations: 0