
Latest publications in Computerized Medical Imaging and Graphics

Convergent–Diffusion Denoising Model for multi-scenario CT Image Reconstruction
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-04 | DOI: 10.1016/j.compmedimag.2024.102491
Xinghua Ma , Mingye Zou , Xinyan Fang , Gongning Luo , Wei Wang , Suyu Dong , Xiangyu Li , Kuanquan Wang , Qing Dong , Ye Tian , Shuo Li
A generic and versatile CT Image Reconstruction (CTIR) scheme can efficiently mitigate imaging noise resulting from inherent physical limitations, substantially bolstering the dependability of CT imaging diagnostics across a wider spectrum of patient cases. Current CTIR techniques often concentrate on distinct areas such as Low-Dose CT denoising (LDCTD), Sparse-View CT reconstruction (SVCTR), and Metal Artifact Reduction (MAR). Nevertheless, owing to the intricate nature of multi-scenario CTIR, these techniques frequently narrow their focus to specific tasks, resulting in limited generalization across diverse scenarios. We propose a novel Convergent–Diffusion Denoising Model (CDDM) for multi-scenario CTIR, which uses a stepwise denoising process to converge toward an imaging-noise-free image with high generalization. CDDM employs a diffusion process guided by an a priori decay distribution to steadily correct imaging noise, thus avoiding overfitting to individual samples. Within CDDM, a domain-correlated sampling network (DS-Net) provides an innovative sinogram-guided noise prediction scheme that leverages both image and sinogram (i.e., dual-domain) information. DS-Net analyzes the correlation of the dual-domain representations to sample the noise distribution, introducing sinogram semantics to avoid secondary artifacts. Experimental results validate the practical applicability of our scheme across various CTIR scenarios, including LDCTD, MAR, and SVCTR, with the support of sinogram knowledge.
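The stepwise denoising idea behind diffusion-based reconstruction can be illustrated with a minimal DDPM-style sketch. This is a generic toy of the standard diffusion formulation, not the authors' CDDM: the linear schedule, the scalar "pixel" value, and the oracle noise predictor are all invented for illustration.

```python
import math

def make_schedule(T=10, beta_start=0.01, beta_end=0.2):
    """Linear variance schedule with cumulative products (standard DDPM)."""
    betas = [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]
    alphas = [1.0 - b for b in betas]
    abars, prod = [], 1.0
    for a in alphas:
        prod *= a
        abars.append(prod)
    return betas, alphas, abars

def predict_x0(x_t, eps_pred, abar_t):
    """Invert the forward noising x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps."""
    return (x_t - math.sqrt(1.0 - abar_t) * eps_pred) / math.sqrt(abar_t)

def reverse_mean(x_t, eps_pred, t, betas, alphas, abars):
    """Posterior mean of one reverse (denoising) step toward x_{t-1}."""
    return (x_t - betas[t] / math.sqrt(1.0 - abars[t]) * eps_pred) / math.sqrt(alphas[t])

T = 10
betas, alphas, abars = make_schedule(T)
x0, eps = 0.7, 0.3                # clean "pixel" and a fixed noise draw
t = T - 1
x_t = math.sqrt(abars[t]) * x0 + math.sqrt(1.0 - abars[t]) * eps  # forward noising
x0_hat = predict_x0(x_t, eps, abars[t])  # an oracle noise prediction recovers x0 exactly
print(round(x0_hat, 6))
```

In a trained model, `eps` would come from a learned predictor (conditioned here on sinogram information, per the abstract), and `reverse_mean` would be applied iteratively from t = T-1 down to 0.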
Computerized Medical Imaging and Graphics, Volume 120, Article 102491. Citations: 0
Automatic medical report generation based on deep learning: A state of the art survey
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-04 | DOI: 10.1016/j.compmedimag.2024.102486
Xinyao Liu , Junchang Xin , Qi Shen , Zhihong Huang , Zhiqiong Wang
The increasing popularity of medical imaging and its expanding applications pose significant challenges for radiologists, who must spend substantial time and effort every day reviewing images and manually writing reports. To address these challenges and speed up patient care, researchers have employed deep learning methods to automatically generate medical reports. In recent years this task has attracted increasing attention and a large amount of related work has emerged. Although some review articles have summarized the state of the art in this field, their discussions remain relatively limited. Therefore, this paper provides a comprehensive review of the latest advancements in automatic medical report generation, focusing on four key aspects: (1) describing the problem of automatic medical report generation, (2) introducing datasets of different modalities, (3) thoroughly analyzing existing evaluation metrics, and (4) classifying existing studies into six categories: retrieval-based, domain knowledge-based, attention-based, reinforcement learning-based, large language model-based, and merged models. In addition, we point out open problems in this field and discuss directions for future work. We hope that this review provides a thorough understanding of automatic medical report generation and encourages continued development in this area.
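Among the evaluation metrics such surveys analyze, n-gram overlap scores like BLEU are the most common for generated reports. A minimal sketch of clipped unigram precision (the n=1 core of BLEU) is shown below; the example sentences are invented, and real evaluations combine multiple n-gram orders with a brevity penalty.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: each candidate token counts only up to
    its frequency in the reference, then divide by candidate length."""
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(c, ref[tok]) for tok, c in cand.items())
    return clipped / max(len(candidate), 1)

# hypothetical generated report vs. radiologist-written reference
report = "the lung fields are clear".split()
ground_truth = "both lung fields are clear".split()
score = unigram_precision(report, ground_truth)
print(score)
```

Surface-overlap metrics like this are exactly what the survey literature critiques: two clinically equivalent reports can score poorly if they use different wording.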
Computerized Medical Imaging and Graphics, Volume 120, Article 102486. Citations: 0
DDEvENet: Evidence-based ensemble learning for uncertainty-aware brain parcellation using diffusion MRI
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-04 | DOI: 10.1016/j.compmedimag.2024.102489
Chenjun Li , Dian Yang , Shun Yao , Shuyue Wang , Ye Wu , Le Zhang , Qiannuo Li , Kang Ik Kevin Cho , Johanna Seitz-Holland , Lipeng Ning , Jon Haitz Legarreta , Yogesh Rathi , Carl-Fredrik Westin , Lauren J. O’Donnell , Nir A. Sochen , Ofer Pasternak , Fan Zhang
In this study, we developed an Evidential Ensemble Neural Network based on Deep learning and Diffusion MRI, namely DDEvENet, for anatomical brain parcellation. The key innovation of DDEvENet is the design of an evidential deep learning framework to quantify predictive uncertainty at each voxel during a single inference. To do so, we design an evidence-based ensemble learning framework for uncertainty-aware parcellation that leverages the multiple dMRI parameters derived from diffusion MRI. Using DDEvENet, we obtained accurate parcellation and uncertainty estimates across different datasets from healthy and clinical populations and with different imaging acquisitions. The overall network includes five parallel subnetworks, each dedicated to learning the FreeSurfer parcellation for a certain diffusion MRI parameter. An evidence-based ensemble methodology is then proposed to fuse the individual outputs. We perform experimental evaluations on large-scale datasets from multiple imaging sources, including high-quality diffusion MRI data from healthy adults and clinical diffusion MRI data from participants with various brain diseases (schizophrenia, bipolar disorder, attention-deficit/hyperactivity disorder, Parkinson's disease, cerebral small vessel disease, and neurosurgical patients with brain tumors). Compared to several state-of-the-art methods, our experimental results demonstrate markedly improved parcellation accuracy across the multiple testing datasets despite the differences in dMRI acquisition protocols and health conditions. Furthermore, thanks to the uncertainty estimation, our DDEvENet approach demonstrates a good ability to detect abnormal brain regions in patients with lesions that are consistent with expert-drawn results, enhancing the interpretability and reliability of the segmentation results.
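Evidential deep learning of this kind typically maps non-negative per-voxel network outputs ("evidence") to a Dirichlet distribution whose concentration quantifies uncertainty. Below is a minimal sketch of the standard subjective-logic conversion; the evidence vectors are invented and this is not DDEvENet's actual architecture.

```python
def dirichlet_uncertainty(evidence):
    """Belief masses and uncertainty from non-negative evidence.

    With K classes: alpha_k = e_k + 1, S = sum(alpha),
    belief b_k = e_k / S, uncertainty u = K / S,
    so sum(b) + u == 1 by construction.
    """
    K = len(evidence)
    alphas = [e + 1.0 for e in evidence]
    S = sum(alphas)
    beliefs = [e / S for e in evidence]
    uncertainty = K / S
    return beliefs, uncertainty

# confident voxel: evidence concentrated on one label
b1, u1 = dirichlet_uncertainty([8.0, 1.0, 1.0])
# ambiguous voxel: no evidence for any label -> maximal uncertainty
b2, u2 = dirichlet_uncertainty([0.0, 0.0, 0.0])
print(round(u1, 4), round(u2, 4))
```

A single forward pass thus yields both a label (argmax belief) and a calibrated uncertainty map, which is what allows flagging abnormal regions in lesioned brains.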
Computerized Medical Imaging and Graphics, Volume 120, Article 102489. Citations: 0
Automated segmentation of deep brain structures from Inversion-Recovery MRI
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-03 | DOI: 10.1016/j.compmedimag.2024.102488
Aigerim Dautkulova , Omar Ait Aider , Céline Teulière , Jérôme Coste , Rémi Chaix , Omar Ouachik , Bruno Pereira , Jean-Jacques Lemaire
Methods for the automated segmentation of brain structures are a major subject of medical research. The small structures of the deep brain have received scant attention, notably owing to the lack of manual delineations by medical experts. In this study, we assessed automated segmentation on a novel clinical dataset containing White Matter Attenuated Inversion-Recovery (WAIR) MRI images and five manually segmented structures (substantia nigra (SN), subthalamic nucleus (STN), red nucleus (RN), mammillary body (MB), and mammillothalamic fascicle (MT-fa)) in 53 patients with severe Parkinson's disease. T1 and DTI images were additionally used. We also assessed the reorientation of DTI diffusion vectors with reference to the ACPC line. A state-of-the-art nnU-Net method was trained and tested on subsets of 38 and 15 image datasets, respectively. We used the Dice similarity coefficient (DSC), 95% Hausdorff distance (95HD), and volumetric similarity (VS) as metrics to evaluate how faithfully the network reproduced manual contouring. Random-effects models statistically compared metric values across structures, accounting for between- and within-participant variability. Results show that WAIR significantly outperformed T1 for DSC (0.739 ± 0.073), 95HD (1.739 ± 0.398), and VS (0.892 ± 0.044). The DSC values for automated segmentation of MB, RN, SN, STN, and MT-fa decreased in that order, in line with the increasing complexity observed in manual segmentation. Based on training results, the reorientation of DTI vectors improved the automated segmentation.
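The three reported metrics are standard overlap, volume, and surface-distance measures. A minimal sketch on tiny 2-D voxel sets is shown below; the coordinates are invented, and for brevity the plain (maximum) Hausdorff distance is computed, whereas 95HD takes the 95th percentile of the point-to-set distances instead of the maximum.

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel sets: 2|A∩B|/(|A|+|B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def volumetric_similarity(a, b):
    """VS = 1 - ||A|-|B|| / (|A|+|B|): compares volumes only, not overlap."""
    return 1 - abs(len(a) - len(b)) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance; 95HD would replace max with the
    95th percentile of the point-to-set distances."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# toy 2-D "segmentations" (invented coordinates)
auto   = {(0, 0), (0, 1), (1, 0)}
manual = {(0, 0), (0, 1), (1, 1)}
print(dice(auto, manual), volumetric_similarity(auto, manual), hausdorff(auto, manual))
```

Note how the two masks have identical volumes (VS = 1) while overlapping imperfectly (DSC < 1), which is why the study reports all three measures rather than any single one.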
Computerized Medical Imaging and Graphics, Volume 120, Article 102488. Citations: 0
Portable head CT motion artifact correction via diffusion-based generative model
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-01 | DOI: 10.1016/j.compmedimag.2024.102478
Zhennong Chen , Siyeop Yoon , Quirin Strotzer , Rehab Naeem Khalid , Matthew Tivnan , Quanzheng Li , Rajiv Gupta , Dufan Wu
Portable head CT images often suffer from motion artifacts due to prolonged scanning times and critically ill patients who are unable to hold still. Image-domain motion correction is attractive for this application because it does not require CT projection data. This paper describes and evaluates a generative model based on conditional diffusion to correct motion artifacts in portable head CT scans. The model was trained to find the motion-free CT image conditioned on the paired motion-corrupted image. Our method utilizes histogram equalization to resolve the intensity range discrepancy between skull and brain tissue, and an advanced Elucidated Diffusion Model (EDM) framework for faster sampling and better motion correction performance. Our EDM framework is superior in correcting artifacts in the brain tissue region and across the entire image compared to CNN-based methods and the standard diffusion approach (DDPM), in a simulation study and a phantom study with known motion-free ground truth. Furthermore, we conducted a reader study on real-world portable CT scans to demonstrate the improvement in image quality achieved by our method.
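The histogram equalization step mentioned above spreads a concentrated intensity range across the full dynamic range. A minimal discrete sketch follows; the toy "slice" values are invented (real CT preprocessing would bin Hounsfield units over a full volume).

```python
def histogram_equalize(values):
    """Map each intensity to its empirical CDF value, spreading
    concentrated intensity ranges uniformly over (0, 1]."""
    n = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    cdf, mapping = 0, {}
    for v in sorted(counts):          # accumulate the histogram in intensity order
        cdf += counts[v]
        mapping[v] = cdf / n
    return [mapping[v] for v in values]

# toy 1-D "slice": two dark pixels, one mid-range, one bright (hypothetical values)
equalized = histogram_equalize([0, 0, 40, 1000])
print(equalized)
```

After the mapping, skull (bright) and brain (mid-range) intensities occupy comparable portions of the output range, which is the discrepancy the method aims to resolve before diffusion-based correction.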
Computerized Medical Imaging and Graphics, Volume 119, Article 102478. Citations: 0
Inspect quantitative signals in placental histopathology: Computer-assisted multiple functional tissues identification through multi-model fusion and distillation framework
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-01 | DOI: 10.1016/j.compmedimag.2024.102482
Yiming Liu , Ling Zhang , Mingxue Gu , Yaoxing Xiao , Ting Yu , Xiang Tao , Qing Zhang , Yan Wang , Dinggang Shen , Qingli Li
Pathological analysis of the placenta is currently a valuable tool for gaining insights into pregnancy outcomes. In placental histopathology, multiple functional tissues can be inspected as potential signals reflecting the transfer functionality between fetal and maternal circulations. However, identifying multiple functional tissues is challenging due to (1) severe heterogeneity in texture, size, and shape, (2) distribution across different scales, and (3) the need for comprehensive assessment at the whole slide image (WSI) level. To solve these problems, we establish a brand-new dataset and propose a computer-aided segmentation framework based on multi-model fusion and distillation to identify multiple functional tissues in placental histopathologic images, including villi, capillaries, fibrin deposits, and trophoblast aggregations. Specifically, we propose a two-stage Multi-model Fusion and Distillation (MMFD) framework. Considering the multi-scale distribution and heterogeneity of the functional tissues, the first stage enhances the visual representation by fusing features from multiple models to boost the effectiveness of the network. However, the multi-model fusion stage introduces extra parameters and a significant computational burden, which is impractical for processing gigapixel WSIs in clinical practice. In the second stage, we propose a straightforward plug-in feature distillation method that transfers knowledge from the large fused model to a compact student model. On a self-collected placental dataset, our proposed MMFD framework improves mean Intersection over Union (mIoU) by 4.3% while achieving an approximately 50% increase in inference speed and using only 10% of the parameters and computational resources of the parameter-efficient fine-tuned Segment Anything Model (SAM) baseline. Visualization of segmentation results across entire WSIs on unseen cases demonstrates the generalizability of our proposed MMFD framework. In addition, experimental results on a public dataset further prove the effectiveness of the MMFD framework on other tasks. Our work presents a foundational method to expedite quantitative analysis of placental histopathology.
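The headline metric here, mean Intersection over Union, averages per-class IoU across tissue classes. A minimal sketch over flattened label maps follows; the toy labels are invented and real evaluation would run over full WSI tiles.

```python
def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# toy flattened label maps for two tissue classes (invented)
miou = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
print(miou)
```

Because each class contributes equally regardless of its pixel count, mIoU rewards correct segmentation of small structures (e.g., capillaries) as much as large ones (e.g., villi), which suits the class-imbalanced placental setting.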
Computerized Medical Imaging and Graphics, Volume 119, Article 102482. Citations: 0
Guidelines for cerebrovascular segmentation: Managing imperfect annotations in the context of semi-supervised learning
IF 5.4 | Medicine (CAS Zone 2) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-01 | DOI: 10.1016/j.compmedimag.2024.102474
Pierre Rougé , Pierre-Henri Conze , Nicolas Passat , Odyssée Merveille
Segmentation in medical imaging is an essential and often preliminary task in the image processing chain, driving numerous efforts toward the design of robust segmentation algorithms. Supervised learning methods achieve excellent performance when fed a sufficient amount of labeled data. However, such labels are typically highly time-consuming, error-prone, and expensive to produce. Alternatively, semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled. They are particularly useful for cerebrovascular segmentation, given that labeling a single volume requires several hours of an expert's time. In addition to the challenge posed by insufficient annotations, there are concerns regarding annotation consistency. The task of annotating the cerebrovascular tree is inherently ambiguous: due to the discrete nature of images, the borders and extremities of vessels are often unclear. Consequently, annotations rely heavily on expert subjectivity and on the underlying clinical objective. These discrepancies significantly increase the complexity of the segmentation task for the model and thereby impair the results. It therefore becomes imperative to provide clinicians with precise guidelines to improve the annotation process and construct more uniform datasets. In this article, we investigate the data dependency of deep learning methods in the context of imperfect data and semi-supervised learning for cerebrovascular segmentation. Specifically, this study compares various state-of-the-art semi-supervised methods based on unsupervised regularization and evaluates their performance in diverse data quantity and quality scenarios. Based on these experiments, we provide guidelines for the annotation and training of cerebrovascular segmentation models.
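One widely used family of unsupervised regularization in semi-supervised segmentation is the mean-teacher scheme: the teacher's weights track an exponential moving average (EMA) of the student's, and a consistency loss ties their predictions on unlabeled data. The scalar-weight sketch below is a generic illustration (all numbers invented; the specific methods compared in the article vary).

```python
def ema_update(teacher, student, decay=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]

def consistency_loss(p_student, p_teacher):
    """Mean squared difference between predictions on unlabeled data;
    minimizing it pushes the student toward the smoother teacher targets."""
    n = len(p_student)
    return sum((a - b) ** 2 for a, b in zip(p_student, p_teacher)) / n

# toy two-weight model: one EMA step and one consistency term (invented values)
teacher, student = [0.0, 0.0], [1.0, 2.0]
teacher = ema_update(teacher, student, decay=0.9)
loss = consistency_loss([0.6, 0.4], [0.5, 0.5])
print(teacher, round(loss, 4))
```

The total training objective would combine this consistency term (on all volumes) with a supervised loss on the small labeled fraction, which is exactly the regime where annotation quality and consistency matter most.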
Computerized Medical Imaging and Graphics, Volume 119, Article 102474 (2025). DOI: 10.1016/j.compmedimag.2024.102474
Citations: 0
A review of the Segment Anything Model (SAM) for medical image analysis: Accomplishments and perspectives
IF 5.4 Medicine (Tier 2) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102473
Mudassar Ali , Tong Wu , Haoji Hu , Qiong Luo , Dong Xu , Weizeng Zheng , Neng Jin , Chen Yang , Jincao Yao
This paper provides an overview of developments in the Segment Anything Model (SAM) for medical image segmentation over the past year. Although direct application to medical datasets has shown mixed results, SAM has achieved notable success in adapting to medical image segmentation tasks through fine-tuning on medical datasets, transitioning from 2D to 3D datasets, and optimizing prompt engineering. Despite these difficulties, the paper emphasizes the significant potential that SAM holds for medical segmentation and makes a substantive contribution to the field. Suggested directions for future work include constructing large-scale datasets, addressing multi-modal and multi-scale information, integrating with semi-supervised learning structures, and extending SAM's application methods in clinical settings.
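When reviews of this kind report that direct SAM application gives "mixed results" on medical data, the comparison is usually made with the Dice coefficient between a predicted mask and the expert annotation; SAM-style predictors also return several candidate masks with quality scores, from which the best is kept. A minimal sketch on flattened binary masks (the mask data below is illustrative, not from any dataset):

```python
def dice(pred, gt):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2*|P ∩ G| / (|P| + |G|); defined as 1.0 when both masks are empty."""
    inter = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0

# A SAM-style predictor returns several candidate masks with confidence
# scores; keep the highest-scoring one before evaluating (toy data).
masks = [[1, 1, 0, 0], [1, 0, 0, 0]]
scores = [0.7, 0.9]
best = masks[max(range(len(scores)), key=scores.__getitem__)]
print(dice(best, [1, 0, 0, 0]))  # -> 1.0
```

In practice the masks are 2D or 3D arrays and the same formula is applied over all voxels.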
Computerized Medical Imaging and Graphics, Volume 119, Article 102473 (2025). DOI: 10.1016/j.compmedimag.2024.102473
Citations: 0
Utilizing domain knowledge to improve the classification of intravenous contrast phase of CT scans
IF 5.4 Medicine (Tier 2) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102458
Liangchen Liu , Jianfei Liu , Bikash Santra , Christopher Parnell , Pritam Mukherjee , Tejas Mathai , Yingying Zhu , Akshaya Anand , Ronald M. Summers
Multiple intravenous contrast phases of CT scans are commonly used in clinical practice to facilitate disease diagnosis. However, contrast phase information is commonly missing or incorrect due to discrepancies in CT series descriptions and imaging practices. This work aims to develop a classification algorithm to automatically determine the contrast phase of a CT scan. We hypothesize that the image intensities of key organs (e.g. aorta, inferior vena cava) affected by contrast enhancement are inherent feature information for deciding the contrast phase. These organs are segmented by TotalSegmentator, after which intensity features are generated for each segmented organ region. Two internal datasets and one external dataset were collected to validate the classification accuracy. In comparison with the baseline ResNet classification method, which did not make use of key organ features, the proposed method achieved a comparable accuracy of 92.5% and F1 score of 92.5% on one internal dataset. On the other internal dataset, the proposed method improved accuracy from 63.9% to 79.8% and F1 score from 43.9% to 65.0%. On the external dataset, accuracy improved from 63.5% to 85.1% and F1 score from 56.4% to 83.9%. Image intensity features from key organs are critical for improving the classification accuracy of contrast phases of CT scans. The classification method based on these features is robust to different scanners and imaging protocols from different institutes. Our results suggest improved classification accuracy over existing approaches, advancing the application of automatic contrast phase classification toward real clinical practice. The code for this work can be found here: (https://github.com/rsummers11/CT_Contrast_Phase_Classifier).
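The paper's hypothesis — that mean intensities inside segmented key organs such as the aorta and inferior vena cava determine the contrast phase — can be illustrated with a toy rule: in the arterial phase the aorta is strongly enhanced while the IVC is not, and in the venous phase both are enhanced. A minimal sketch with hypothetical HU thresholds (the actual method feeds such intensity features to a learned classifier rather than fixed rules):

```python
def mean_intensity(ct, mask):
    """Mean CT intensity (HU) of voxels inside a binary organ mask;
    ct and mask are flat, equal-length sequences."""
    vals = [v for v, m in zip(ct, mask) if m]
    return sum(vals) / len(vals) if vals else 0.0

def classify_phase(aorta_hu, ivc_hu):
    """Toy decision rule with hypothetical thresholds: arterial = bright
    aorta and dark IVC; venous = both enhanced; otherwise non-contrast."""
    if aorta_hu > 200 and ivc_hu < 120:
        return "arterial"
    if aorta_hu > 120 and ivc_hu > 120:
        return "venous"
    return "non-contrast"

print(classify_phase(280, 90))   # -> arterial
print(classify_phase(150, 160))  # -> venous
```

A learned classifier replaces the hand-set thresholds with boundaries fitted to the intensity features of all segmented organs, which is what makes the approach robust across scanners and protocols.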
Computerized Medical Imaging and Graphics, Volume 119, Article 102458 (2025). DOI: 10.1016/j.compmedimag.2024.102458
Citations: 0
A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer
IF 5.4 Medicine (Tier 2) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102477
Chi Dong , Yujiao Wu , Bo Sun , Jiayi Bo , Yufei Huang , Yikang Geng , Qianhui Zhang , Ruixiang Liu , Wei Guo , Xingling Wang , Xiran Jiang

Objective

This study presents a novel framework that integrates contrastive learning and knowledge distillation to improve early ovarian cancer (OC) recurrence prediction, addressing the challenges posed by limited labeled data and tumor heterogeneity.

Methods

The research utilized CT imaging data from 585 OC patients, including 142 cases with complete follow-up information and 125 cases with unknown recurrence status. To pre-train the teacher network, 318 unlabeled images were sourced from public datasets (TCGA-OV and PLAGH-202-OC). Multi-view contrastive learning (MVCL) was employed to generate multi-view 2D tumor slices, enhancing the teacher network’s ability to extract features from complex, heterogeneous tumors with high intra-class variability. Building on this foundation, the proposed semi-supervised multi-task self-distillation (Semi-MTSD) framework integrated OC subtyping as an auxiliary task using multi-task learning (MTL). This approach allowed the co-training of a student network for recurrence prediction, leveraging both labeled and unlabeled data to improve predictive performance in data-limited settings. The student network's performance was assessed using preoperative CT images with known recurrence outcomes. Evaluation metrics included area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), specificity (SPE), F1 score, floating-point operations (FLOPs), parameter count, training time, inference time, and mean corruption error (mCE).
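Multi-view contrastive learning of the kind described above pulls together representations of different 2D views of the same tumor while pushing apart views of other tumors; the standard objective for this is the InfoNCE loss. A minimal sketch taking precomputed cosine similarities as inputs (illustrative only — the exact loss and temperature used in the paper are not specified here):

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.1):
    """InfoNCE loss for one anchor: sim_pos is the similarity to the
    positive (another view of the same tumor), sim_negs the similarities
    to negatives (views of other tumors); tau is the temperature."""
    num = math.exp(sim_pos / tau)
    den = num + sum(math.exp(s / tau) for s in sim_negs)
    return -math.log(num / den)

# A well-separated positive yields a near-zero loss; a positive that is
# less similar than a negative yields a large loss.
print(info_nce(0.9, [0.1, 0.0, -0.2]))
print(info_nce(0.1, [0.9, 0.0, -0.2]))
```

Lowering `tau` sharpens the distribution over similarities, penalizing hard negatives more strongly.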

Results

The proposed framework achieved an ACC of 0.862, an AUC of 0.916, a SPE of 0.895, and an F1 score of 0.831, surpassing existing methods for OC recurrence prediction. Comparative and ablation studies validated the model’s robustness, particularly in scenarios characterized by data scarcity and tumor heterogeneity.

Conclusion

The MVCL and Semi-MTSD framework demonstrates significant advancements in OC recurrence prediction, showcasing strong generalization capabilities in complex, data-constrained environments. This approach offers a promising pathway toward more personalized treatment strategies for OC patients.
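The self-distillation component of Semi-MTSD transfers the teacher network's soft predictions to the student, which is what lets unlabeled cases contribute to training. The standard form of this transfer (Hinton-style distillation) is a temperature-scaled cross-entropy between softened teacher and student outputs; a minimal sketch under that assumption (the paper's exact distillation loss may differ):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    m = max(x / T for x in logits)                      # subtract max for stability
    exps = [math.exp(x / T - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between temperature-softened teacher targets and
    student predictions, scaled by T^2 (standard distillation form)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))

# The loss is smallest when the student matches the teacher.
print(distill_loss([2.0, 0.5], [2.0, 0.5]))  # matched: minimum for these targets
print(distill_loss([0.5, 2.0], [2.0, 0.5]))  # mismatched: larger
```

The `T * T` factor keeps the gradient magnitude of the soft-target term comparable to that of the hard-label term when both are combined, as in the multi-task setup described above.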
Computerized Medical Imaging and Graphics, Volume 119, Article 102477 (2025). DOI: 10.1016/j.compmedimag.2024.102477
Citations: 0