
Medical Image Analysis: latest publications

Semi-supervised medical image segmentation via weak-to-strong perturbation consistency and edge-aware contrastive representation
IF 10.9 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-06 | DOI: 10.1016/j.media.2024.103450
Yang Yang, Guoying Sun, Tong Zhang, Ruixuan Wang, Jingyong Su
Although supervised learning has demonstrated impressive accuracy in medical image segmentation, its reliance on large labeled datasets poses a challenge due to the effort and expertise required for data acquisition. Semi-supervised learning has emerged as a potential solution. However, it tends to yield satisfactory segmentation performance in the central region of the foreground but struggles in the edge region. In this paper, we propose an innovative framework that effectively leverages unlabeled data to improve segmentation performance, especially in edge regions. Our proposed framework includes two novel designs. Firstly, we introduce a weak-to-strong perturbation strategy with a corresponding feature-perturbed consistency loss to efficiently utilize unlabeled data and guide our framework in learning reliable regions. Secondly, we propose an edge-aware contrastive loss that utilizes uncertainty to select positive pairs, thereby learning discriminative pixel-level features in the edge regions using unlabeled data. In this way, the model minimizes the discrepancy of multiple predictions and improves representation ability, ultimately aiming at impressive performance on both primary and edge regions. We conducted a comparative analysis of the segmentation results on the publicly available BraTS2020 dataset, LA dataset, and the 2017 ACDC dataset. Through extensive quantification and visualization experiments under three standard semi-supervised settings, we demonstrate the effectiveness of our approach and set a new state of the art for semi-supervised medical image segmentation. Our code is released publicly at https://github.com/youngyzzZ/SSL-w2sPC.
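For readers unfamiliar with the consistency idea described above, the following is a minimal sketch of generic weak-to-strong perturbation consistency in PyTorch: predictions on a weakly perturbed view pseudo-label the strongly perturbed view, masked by a confidence threshold. The `model`, `weak_aug`, `strong_aug`, and `conf_thresh` names are illustrative assumptions; the paper's feature-level perturbations and edge-aware contrastive loss are not reproduced here.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_consistency(model, unlabeled, weak_aug, strong_aug, conf_thresh=0.95):
    """Pseudo-labels from the weakly perturbed view supervise the strongly perturbed view."""
    with torch.no_grad():
        weak_logits = model(weak_aug(unlabeled))     # teacher pass, no gradients
        probs = torch.softmax(weak_logits, dim=1)
        conf, pseudo = probs.max(dim=1)              # per-pixel confidence and pseudo-label
        mask = (conf >= conf_thresh).float()         # keep only reliable pixels

    strong_logits = model(strong_aug(unlabeled))     # student pass on the strong view
    loss = F.cross_entropy(strong_logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```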
Citations: 0
Domain-specific information preservation for Alzheimer’s disease diagnosis with incomplete multi-modality neuroimages
IF 10.9 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-06 | DOI: 10.1016/j.media.2024.103448
Haozhe Xu, Jian Wang, Qianjin Feng, Yu Zhang, Zhenyuan Ning
Although multi-modality neuroimages have advanced the early diagnosis of Alzheimer’s Disease (AD), the missing-modality issue still poses a unique challenge in clinical practice. Recent studies have tried to impute the missing data so as to utilize all available subjects for training robust multi-modality models. However, these studies may overlook the modality-specific information inherent in multi-modality data, that is, different modalities possess distinct imaging characteristics and focus on different aspects of the disease. In this paper, we propose a domain-specific information preservation (DSIP) framework, consisting of a modality imputation stage and a status identification stage, for AD diagnosis with incomplete multi-modality neuroimages. In the first stage, a specificity-induced generative adversarial network (SIGAN) is developed to bridge the modality gap and capture modality-specific details for imputing high-quality neuroimages. In the second stage, a specificity-promoted diagnosis network (SPDN) is designed to promote inter-modality feature interaction and classifier robustness for identifying disease status accurately. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods in both modality imputation and status identification tasks.
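As a rough illustration of the adversarial imputation idea in the first stage (not the paper's SIGAN), a generator can synthesize the missing modality from the available one, trained with an adversarial term plus an L1 reconstruction term; the network modules and the `lambda_l1` weight below are placeholders.

```python
import torch
import torch.nn as nn

def imputation_losses(generator, discriminator, available, target, lambda_l1=100.0):
    """Generator/discriminator losses for synthesizing a missing modality image."""
    bce = nn.BCEWithLogitsLoss()
    fake = generator(available)              # e.g. available MRI -> synthetic missing modality

    # Generator: fool the discriminator while staying close to the real target modality.
    pred_fake = discriminator(fake)
    g_loss = bce(pred_fake, torch.ones_like(pred_fake)) \
             + lambda_l1 * nn.functional.l1_loss(fake, target)

    # Discriminator: tell real target images apart from synthesized ones.
    pred_real = discriminator(target)
    pred_fake_det = discriminator(fake.detach())
    d_loss = 0.5 * (bce(pred_real, torch.ones_like(pred_real))
                    + bce(pred_fake_det, torch.zeros_like(pred_fake_det)))
    return g_loss, d_loss
```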
Citations: 0
Automated ultrasonography of hepatocellular carcinoma using discrete wavelet transform based deep-learning neural network.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-04 | DOI: 10.1016/j.media.2025.103453
Se-Yeol Rhyou, Jae-Chern Yoo

This study introduces HCC-Net, a novel wavelet-based approach for the accurate diagnosis of hepatocellular carcinoma (HCC) from abdominal ultrasound (US) images using artificial neural networks. The HCC-Net integrates the discrete wavelet transform (DWT) to decompose US images into four sub-band images, a lesion detector for hierarchical lesion localization, and a pattern-augmented classifier for generating pattern-enhanced lesion images and subsequent classification. The lesion detection uses a hierarchical coarse-to-fine approach to minimize missed lesions. CoarseNet performs initial lesion localization, while FineNet identifies any lesions that were missed. In the classification phase, the wavelet components of detected lesions are synthesized to create pattern-augmented images that enhance feature distinction, resulting in highly accurate classifications. These augmented images are classified into 'Normal,' 'Benign,' or 'Malignant' categories according to their morphologic features on sonography. The experimental results demonstrate the significant effectiveness of the proposed coarse-to-fine detection framework and pattern-augmented classifier in lesion detection and classification. We achieved an accuracy of 96.2 %, a sensitivity of 97.6 %, and a specificity of 98.1 % on the Samsung Medical Center dataset, indicating HCC-Net's potential as a reliable tool for liver cancer screening.
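The first step described above, splitting an ultrasound image into four sub-band images with a one-level 2D discrete wavelet transform, can be sketched with PyWavelets; the wavelet choice and image size are assumptions, not necessarily those used in HCC-Net.

```python
import numpy as np
import pywt

def dwt_subbands(image: np.ndarray, wavelet: str = "haar"):
    """Return the LL (approximation) and LH/HL/HH (detail) sub-band images."""
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    return ll, lh, hl, hh

# Example: decompose a dummy 256x256 grayscale ultrasound frame.
subbands = dwt_subbands(np.random.rand(256, 256))
print([sb.shape for sb in subbands])  # four (128, 128) sub-band images
```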

Citations: 0
Strategies for generating synthetic computed tomography-like imaging from radiographs: A scoping review.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-04 | DOI: 10.1016/j.media.2025.103454
Daniel De Wilde, Olivier Zanier, Raffaele Da Mutten, Michael Jin, Luca Regli, Carlo Serra, Victor E Staartjes

Background: Advancements in tomographic medical imaging have revolutionized diagnostics and treatment monitoring by offering detailed 3D visualization of internal structures. Despite the significant value of computed tomography (CT), challenges such as high radiation dosage and cost barriers limit its accessibility, especially in low- and middle-income countries. Recognizing the potential of radiographic imaging in reconstructing CT images, this scoping review aims to explore the emerging field of synthesizing 3D CT-like images from 2D radiographs by examining the current methodologies.

Methods: A scoping review was carried out following PRISMA-SR guidelines. Eligibility criteria for the articles included full-text articles published up to September 9, 2024, studying methodologies for the synthesis of 3D CT images from 2D biplanar or four-projection x-ray images. Eligible articles were sourced from PubMed MEDLINE, Embase, and arXiv.

Results: 76 studies were included. The majority (50.8 %, n = 30) were published between 2010 and 2020 (38.2 %, n = 29) and from 2020 onwards (36.8 %, n = 28), with European (40.8 %, n = 31), North American (26.3 %, n = 20), and Asian (32.9 %, n = 25) institutions being primary contributors. Anatomical regions varied, with 17.1 % (n = 13) of studies not using clinical data. Further, studies focused on the chest (25 %, n = 19), spine and vertebrae (17.1 %, n = 13), coronary arteries (10.5 %, n = 8), and cranial structures (10.5 %, n = 8), among other anatomical regions. Convolutional neural networks (CNN) (19.7 %, n = 15), generative adversarial networks (21.1 %, n = 16) and statistical shape models (15.8 %, n = 12) emerged as the most applied methodologies. A limited number of the included studies explored the use of conditional diffusion models, iterative reconstruction algorithms, statistical shape models, and digital tomosynthesis.

Conclusion: This scoping review summarizes current strategies and challenges in synthetic imaging generation. The development of 3D CT-like imaging from 2D radiographs could reduce radiation risk while simultaneously addressing financial and logistical obstacles that impede global access to CT imaging. Despite initial promising results, the field encounters challenges with varied methodologies and frequent lack of proper validation, requiring further research to define synthetic imaging's clinical role.

Citations: 0
Unlocking the diagnostic potential of electrocardiograms through information transfer from cardiac magnetic resonance imaging.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-04 | DOI: 10.1016/j.media.2024.103451
Özgün Turgut, Philip Müller, Paul Hager, Suprosanna Shit, Sophie Starck, Martin J Menten, Eimo Martens, Daniel Rueckert

Cardiovascular diseases (CVD) can be diagnosed using various diagnostic modalities. The electrocardiogram (ECG) is a cost-effective and widely available diagnostic aid that provides functional information of the heart. However, its ability to classify and spatially localise CVD is limited. In contrast, cardiac magnetic resonance (CMR) imaging provides detailed structural information of the heart and thus enables evidence-based diagnosis of CVD, but long scan times and high costs limit its use in clinical routine. In this work, we present a deep learning strategy for cost-effective and comprehensive cardiac screening solely from ECG. Our approach combines multimodal contrastive learning with masked data modelling to transfer domain-specific information from CMR imaging to ECG representations. In extensive experiments using data from 40,044 UK Biobank subjects, we demonstrate the utility and generalisability of our method for subject-specific risk prediction of CVD and the prediction of cardiac phenotypes using only ECG data. Specifically, our novel multimodal pre-training paradigm improves performance by up to 12.19% for risk prediction and 27.59% for phenotype prediction. In a qualitative analysis, we demonstrate that our learned ECG representations incorporate information from CMR image regions of interest. Our entire pipeline is publicly available at https://github.com/oetu/MMCL-ECG-CMR.
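As a simplified illustration of the multimodal contrastive component (the masked data modelling part is omitted), the sketch below aligns paired ECG and CMR embeddings with a symmetric InfoNCE objective; the already-projected embeddings `ecg_z`/`cmr_z` and the temperature value are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(ecg_z: torch.Tensor, cmr_z: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of paired ECG and CMR embeddings of shape (batch, dim)."""
    ecg_z = F.normalize(ecg_z, dim=1)
    cmr_z = F.normalize(cmr_z, dim=1)
    logits = ecg_z @ cmr_z.t() / temperature                     # pairwise cosine similarities
    targets = torch.arange(ecg_z.size(0), device=ecg_z.device)   # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```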

Citations: 0
Self-interactive learning: Fusion and evolution of multi-scale histomorphology features for molecular traits prediction in computational pathology.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-03 | DOI: 10.1016/j.media.2024.103437
Yang Hu, Korsuk Sirinukunwattana, Bin Li, Kezia Gaitskell, Enric Domingo, Willem Bonnaffé, Marta Wojciechowska, Ruby Wood, Nasullah Khalid Alham, Stefano Malacrino, Dan J Woodcock, Clare Verrill, Ahmed Ahmed, Jens Rittscher

Predicting disease-related molecular traits from histomorphology brings great opportunities for precision medicine. Despite the rich information present in histopathological images, extracting fine-grained molecular features from standard whole slide images (WSI) is non-trivial. The task is further complicated by the lack of annotations for subtyping and contextual histomorphological features that might span multiple scales. This work proposes a novel multiple-instance learning (MIL) framework capable of WSI-based cancer morpho-molecular subtyping by fusion of different-scale features. Our method, Inter-MIL, follows a weakly supervised scheme. It enables the training of the patch-level encoder for WSI in a task-aware optimisation procedure, a step normally not modelled in most existing MIL-based WSI analysis frameworks. We demonstrate that optimising the patch-level encoder is crucial to achieving high-quality fine-grained and tissue-level subtyping results and offers a significant improvement over task-agnostic encoders. Our approach deploys a pseudo-label propagation strategy to update the patch encoder iteratively, allowing discriminative subtype features to be learned. This mechanism also enables the extraction of fine-grained attention within image tiles (the small patches), a task largely ignored in most existing weakly supervised frameworks. With Inter-MIL, we carried out four challenging cancer molecular subtyping tasks in the context of ovarian, colorectal, lung, and breast cancer. Extensive evaluation results show that Inter-MIL is a robust framework for cancer morpho-molecular subtyping with superior performance compared to several recently proposed methods, in small-dataset scenarios where the number of available training slides is less than 100. The iterative optimisation mechanism of Inter-MIL significantly improves the quality of the image features learned by the patch-level encoder and generally directs the attention map to areas that better align with experts' interpretation, leading to the identification of more reliable histopathology biomarkers. Moreover, an external validation cohort is used to verify the robustness of Inter-MIL on molecular trait prediction.
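The patch-to-slide aggregation that such MIL frameworks rely on can be sketched as a generic attention-based pooling head over patch embeddings; this is an assumed, simplified mechanism for illustration, not the Inter-MIL implementation, and the feature dimensions are placeholders.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-weighted pooling of patch embeddings into one slide-level prediction."""
    def __init__(self, dim: int = 512, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (n_patches, dim) embeddings from one whole slide image.
        weights = torch.softmax(self.attn(patch_feats), dim=0)    # (n_patches, 1) attention scores
        slide_feat = (weights * patch_feats).sum(dim=0)           # attention-weighted bag feature
        return self.classifier(slide_feat), weights.squeeze(-1)   # slide logits + patch attention
```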

Citations: 0
SegRap2023: A benchmark of organs-at-risk and gross tumor volume segmentation for radiotherapy planning of nasopharyngeal carcinoma
IF 10.9 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-02 | DOI: 10.1016/j.media.2024.103447
Xiangde Luo, Jia Fu, Yunxin Zhong, Shuolin Liu, Bing Han, Mehdi Astaraki, Simone Bendazzoli, Iuliana Toma-Dasu, Yiwen Ye, Ziyang Chen, Yong Xia, Yanzhou Su, Jin Ye, Junjun He, Zhaohu Xing, Hongqiu Wang, Lei Zhu, Kaixiang Yang, Xin Fang, Zhiwei Wang, Chan Woong Lee, Sang Joon Park, Jaehee Chun, Constantin Ulrich, Klaus H. Maier-Hein, Nchongmaje Ndipenoch, Alina Miron, Yongmin Li, Yimeng Zhang, Yu Chen, Lu Bai, Jinlong Huang, Chengyang An, Lisheng Wang, Kaiwen Huang, Yunqi Gu, Tao Zhou, Mu Zhou, Shichuan Zhang, Wenjun Liao, Guotai Wang, Shaoting Zhang
Radiation therapy is a primary and effective treatment strategy for NasoPharyngeal Carcinoma (NPC). The precise delineation of Gross Tumor Volumes (GTVs) and Organs-At-Risk (OARs) is crucial in radiation treatment, directly impacting patient prognosis. Although deep learning has achieved remarkable performance on various medical image segmentation tasks, its performance on the OARs and GTVs of NPC is still limited, and high-quality benchmark datasets for this task are highly desirable for model development and evaluation. To alleviate this problem, the SegRap2023 challenge was organized in conjunction with MICCAI2023 and presented a large-scale benchmark for OAR and GTV segmentation with 400 Computed Tomography (CT) scans from 200 NPC patients, each with a pair of pre-aligned non-contrast and contrast-enhanced CT scans. The challenge aimed to segment 45 OARs and 2 GTVs from the paired CT scans per patient, and received 10 and 11 complete submissions for the two tasks, respectively. In this paper, we detail the challenge and analyze the solutions of all participants. The average Dice similarity coefficient scores for all submissions ranged from 76.68% to 86.70% for OARs and from 70.42% to 73.44% for GTVs. We conclude that the segmentation of relatively large OARs is well addressed, and more efforts are needed for GTVs and small or thin OARs. The benchmark remains available at: https://segrap2023.grand-challenge.org.
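For reference, the Dice similarity coefficient used to rank submissions can be computed per structure from binary masks as in the generic sketch below; this is illustrative only, not the challenge's official evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))
```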
Citations: 0
IVIM-Morph: Motion-compensated quantitative Intra-voxel Incoherent Motion (IVIM) analysis for functional fetal lung maturity assessment from diffusion-weighted MRI data
IF 10.9 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-31 | DOI: 10.1016/j.media.2024.103445
Noga Kertes, Yael Zaffrani-Reznikov, Onur Afacan, Sila Kurugol, Simon K. Warfield, Moti Freiman
Quantitative analysis of pseudo-diffusion in diffusion-weighted magnetic resonance imaging (DWI) data shows potential for assessing fetal lung maturation and generating valuable imaging biomarkers. Yet, the clinical utility of DWI data is hindered by unavoidable fetal motion during acquisition. We present IVIM-morph, a self-supervised deep neural network model for motion-corrected quantitative analysis of DWI data using the Intra-voxel Incoherent Motion (IVIM) model. IVIM-morph combines two sub-networks, a registration sub-network, and an IVIM model fitting sub-network, enabling simultaneous estimation of IVIM model parameters and motion. To promote physically plausible image registration, we introduce a biophysically informed loss function that effectively balances registration and model-fitting quality. We validated the efficacy of IVIM-morph by establishing a correlation between the predicted IVIM model parameters of the lung and gestational age (GA) using fetal DWI data of 39 subjects. Our approach was compared against six baseline methods: (1) no motion compensation, (2) affine registration of all DWI images to the initial image, (3) deformable registration of all DWI images to the initial image, (4) deformable registration of each DWI image to its preceding image in the sequence, (5) iterative deformable motion compensation combined with IVIM model parameter estimation, and (6) self-supervised deep-learning-based deformable registration. IVIM-morph exhibited a notably improved correlation with gestational age (GA) when performing in-vivo quantitative analysis of fetal lung DWI data during the canalicular phase. Specifically, over 2 test groups of cases, it achieved an R_f^2 of 0.44 and 0.52, outperforming the values of 0.27 and 0.25, 0.25 and 0.00, 0.00 and 0.00, 0.38 and 0.00, and 0.07 and 0.14 obtained by other methods. IVIM-morph shows potential in developing valuable biomarkers for non-invasive assessment of fetal lung maturity with DWI data. Moreover, its adaptability opens the door to potential applications in other clinical contexts where motion compensation is essential for quantitative DWI analysis. The IVIM-morph code is readily available at: https://github.com/TechnionComputationalMRILab/qDWI-Morph.
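The signal model that the fitting sub-network estimates is the standard bi-exponential IVIM form S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D). A minimal, classical least-squares fit of that model with SciPy is sketched below for orientation; the b-values, initial guess, and bounds are placeholders, and the paper itself performs the estimation with a neural sub-network rather than curve fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, d_star, d):
    """Normalized IVIM signal: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

b_values = np.array([0, 50, 100, 200, 400, 600, 800], dtype=float)   # example b-values (s/mm^2)
signal = ivim_signal(b_values, 0.3, 0.05, 0.002)                     # synthetic noiseless curve
params, _ = curve_fit(ivim_signal, b_values, signal,
                      p0=[0.2, 0.03, 0.001], bounds=([0, 0, 0], [1, 1, 0.01]))
print(params)  # recovered (f, D*, D)
```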
Citations: 0
Cooperative multi-task learning and interpretable image biomarkers for glioma grading and molecular subtyping.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1016/j.media.2024.103435
Qijian Chen, Lihui Wang, Zeyu Deng, Rongpin Wang, Li Wang, Caiqing Jian, Yue-Min Zhu

Deep learning methods have been widely used for various glioma predictions. However, they are usually task-specific, segmentation-dependent and lack of interpretable biomarkers. How to accurately predict the glioma histological grade and molecular subtypes at the same time and provide reliable imaging biomarkers is still challenging. To achieve this, we propose a novel cooperative multi-task learning network (CMTLNet) which consists of a task-common feature extraction (CFE) module, a task-specific unique feature extraction (UFE) module and a unique-common feature collaborative classification (UCFC) module. In CFE, a segmentation-free tumor feature perception (SFTFP) module is first designed to extract the tumor-aware features in a classification manner rather than a segmentation manner. Following that, based on the multi-scale tumor-aware features extracted by SFTFP module, CFE uses convolutional layers to further refine these features, from which the task-common features are learned. In UFE, based on orthogonal projection and conditional classification strategies, the task-specific unique features are extracted. In UCFC, the unique and common features are fused with an attention mechanism to make them adaptive to different glioma prediction tasks. Finally, deep features-guided interpretable radiomic biomarkers for each glioma prediction task are explored by combining SHAP values and correlation analysis. Through the comparisons with recent reported methods on a large multi-center dataset comprising over 1800 cases, we demonstrated the superiority of the proposed CMTLNet, with the mean Matthews correlation coefficient in validation and test sets improved by (4.1%, 10.7%), (3.6%, 23.4%), and (2.7%, 22.7%) respectively for glioma grading, 1p/19q and IDH status prediction tasks. In addition, we found that some radiomic features are highly related to uninterpretable deep features and that their variation trends are consistent in multi-center datasets, which can be taken as reliable imaging biomarkers for glioma diagnosis. The proposed CMTLNet provides an interpretable tool for glioma multi-task prediction, which is beneficial for glioma precise diagnosis and personalized treatment.
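The orthogonal-projection idea mentioned for extracting task-specific unique features can be pictured as removing the component of a task feature that lies along the shared task-common feature; the sketch below is a simplified assumption for illustration, not the CMTLNet code.

```python
import torch

def orthogonal_residual(task_feat: torch.Tensor, common_feat: torch.Tensor, eps: float = 1e-8):
    """Subtract the projection of task_feat onto common_feat; both tensors are (batch, dim)."""
    scale = (task_feat * common_feat).sum(dim=1, keepdim=True) / \
            (common_feat.pow(2).sum(dim=1, keepdim=True) + eps)
    return task_feat - scale * common_feat   # component orthogonal to the common feature
```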

Citations: 0
2D echocardiography video to 3D heart shape reconstruction for clinical application.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-28 | DOI: 10.1016/j.media.2024.103434
Fabian Laumer, Lena Rubi, Michael A Matter, Stefano Buoso, Gabriel Fringeli, François Mach, Frank Ruschitzka, Joachim M Buhmann, Christian M Matter

Transthoracic Echocardiography (TTE) is a crucial tool for assessing cardiac morphology and function quickly and non-invasively without ionising radiation. However, the examination is subject to intra- and inter-user variability and recordings are often limited to 2D imaging and assessments of end-diastolic and end-systolic volumes. We have developed a novel, fully automated machine learning-based framework to generate a personalised 4D (3D plus time) model of the left ventricular (LV) blood pool with high temporal resolution. A 4D shape is reconstructed from specific 2D echocardiographic views employing deep neural networks, pretrained on a synthetic dataset, and fine-tuned in a self-supervised manner using a novel optimisation method for cross-sectional imaging data. No 3D ground truth is needed for model training. The generated digital twins enhance the interpretation of TTE data by providing a versatile tool for automated analysis of LV volume changes, localisation of infarct areas, and identification of new and clinically relevant biomarkers. Experiments are performed on a multicentre dataset that includes TTE exams of 144 patients with normal TTE and 314 patients with acute myocardial infarction (AMI). The novel biomarkers show a high predictive value for survival (area under the curve (AUC) of 0.82 for 1-year all-cause mortality), demonstrating that personalised 3D shape modelling has the potential to improve diagnostic accuracy and risk assessment.
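The headline survival figure, an AUC of 0.82 for 1-year all-cause mortality, is an ordinary ROC AUC over binary outcomes; a minimal evaluation sketch with scikit-learn is shown below, using placeholder labels and scores rather than study data.

```python
from sklearn.metrics import roc_auc_score

one_year_mortality = [0, 1, 0, 0, 1, 0, 1, 0]               # hypothetical binary outcomes
biomarker_score = [0.2, 0.8, 0.1, 0.4, 0.9, 0.3, 0.6, 0.2]  # hypothetical model/biomarker scores
print(roc_auc_score(one_year_mortality, biomarker_score))   # area under the ROC curve
```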

Citations: 0