
Computerized Medical Imaging and Graphics: Latest Publications

Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-22 | DOI: 10.1016/j.compmedimag.2024.102401
Bartosz Machura, Damian Kucharski, Oskar Bozek, Bartosz Eksner, Bartosz Kokoszka, Tomasz Pekala, Mateusz Radom, Marek Strzelczak, Lukasz Zarudzki, Benjamín Gutiérrez-Becker, Agata Krason, Jean Tessier, Jakub Nalepa

Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors reveal a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses captured within MRI. Patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and – importantly – it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precisely tracking disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to elaborate training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of disease progression tracking and evaluation of treatment efficacy.
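
The abstract does not specify how the detection and segmentation ensemble members are fused. As an editorial illustration only, the following minimal Python/NumPy sketch shows one common fusion strategy, a weighted average of per-voxel probability maps followed by thresholding; the function name, weights, and array shapes are hypothetical and are not the authors' implementation.

```python
import numpy as np

def ensemble_lesion_mask(prob_maps, weights=None, threshold=0.5):
    """Fuse per-voxel probability maps from several models into one lesion mask.

    prob_maps : list of np.ndarray, each of shape (D, H, W), values in [0, 1].
    weights   : optional per-model weights; uniform if None.
    """
    stack = np.stack(prob_maps, axis=0)
    if weights is None:
        weights = np.ones(len(prob_maps)) / len(prob_maps)
    weights = np.asarray(weights, dtype=np.float32).reshape(-1, 1, 1, 1)
    fused = (stack * weights).sum(axis=0)          # weighted mean probability
    return (fused >= threshold).astype(np.uint8)   # binary metastasis mask

# Hypothetical usage: outputs of a detector and two segmentation networks,
# resampled to a common voxel grid.
rng = np.random.default_rng(0)
maps = [rng.random((64, 128, 128)).astype(np.float32) for _ in range(3)]
mask = ensemble_lesion_mask(maps, weights=[0.4, 0.3, 0.3])
print(mask.shape, int(mask.sum()))
```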

Citations: 0
A 3D framework for segmentation of carotid artery vessel wall and identification of plaque compositions in multi-sequence MR images
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-21 | DOI: 10.1016/j.compmedimag.2024.102402
Jian Wang, Fan Yu, Mengze Zhang, Jie Lu, Zhen Qian

Accurately assessing carotid artery wall thickening and identifying risky plaque components are critical for early diagnosis and risk management of carotid atherosclerosis. In this paper, we present a 3D framework for automated segmentation of the carotid artery vessel wall and identification of the compositions of carotid plaque in multi-sequence magnetic resonance (MR) images under the challenge of imperfect manual labeling. Manual labeling is commonly done in 2D slices of these multi-sequence MR images and often lacks perfect alignment across 2D slices and the multiple MR sequences, leading to labeling inaccuracies. To address such challenges, our framework is split into two parts: a segmentation subnetwork and a plaque component identification subnetwork. Initially, a 2D localization network pinpoints the carotid artery's position, extracting the region of interest (ROI) from the input images. Following that, a signed-distance-map-enabled 3D U-Net (Çiçek et al., 2016), an adaptation of the U-Net (Ronneberger et al., 2015), segments the carotid artery vessel wall. This method allows for the concurrent segmentation of the vessel wall area using the signed distance map (SDM) loss (Xue et al., 2020), which regularizes the segmentation surfaces in 3D and reduces erroneous segmentation caused by imperfect manual labels. Subsequently, the ROIs of the input images and the obtained vessel wall masks are extracted and combined to obtain the identification results of plaque components in the identification subnetwork. Tailored data augmentation operations are introduced into the framework to reduce the false positive rate of calcification and hemorrhage identification. We trained and tested our proposed method on a dataset consisting of 115 patients, and it achieves an accurate segmentation of the carotid artery wall (0.8459 Dice), which is superior to the best result in published studies (0.7885 Dice). Our approach yielded accuracies of 0.82, 0.73 and 0.88 for the identification of calcification, lipid-rich core and hemorrhage components, respectively. Our proposed framework can potentially be used in clinical and research settings to help radiologists perform cumbersome reading tasks and evaluate the risk of carotid plaques.
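
For readers unfamiliar with the SDM loss cited above (Xue et al., 2020), the sketch below illustrates one common form of it: the network regresses a signed distance map, which is compared against the SDM derived from the ground-truth mask. This is a simplified illustration under stated assumptions; the exact formulation used in the paper may differ.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance: positive outside the object, negative inside.
    Assumes the mask contains both foreground and background voxels."""
    mask = mask.astype(bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def sdm_loss(pred_sdm, gt_mask):
    """L1 regression against the ground-truth SDM (one common form of SDM loss)."""
    gt = np.stack([signed_distance_map(m) for m in gt_mask.cpu().numpy()])
    gt = torch.from_numpy(gt).to(pred_sdm.device, pred_sdm.dtype)
    return torch.mean(torch.abs(pred_sdm - gt))

# Hypothetical usage on a batch of two 3D vessel-wall masks
gt_mask = torch.zeros(2, 32, 64, 64)
gt_mask[:, 10:20, 20:40, 20:40] = 1
pred_sdm = torch.randn(2, 32, 64, 64, requires_grad=True)
loss = sdm_loss(pred_sdm, gt_mask)
loss.backward()
print(float(loss))
```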

Citations: 0
Enhancing cancer prediction in challenging screen-detected incident lung nodules using time-series deep learning
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-20 | DOI: 10.1016/j.compmedimag.2024.102399
Shahab Aslani, Pavan Alluri, Eyjolfur Gudmundsson, Edward Chandy, John McCabe, Anand Devaraj, Carolyn Horst, Sam M. Janes, Rahul Chakkara, Daniel C. Alexander, SUMMIT consortium, Arjun Nair, Joseph Jacob

Lung cancer screening (LCS) using annual computed tomography (CT) scanning significantly reduces mortality by detecting cancerous lung nodules at an earlier stage. Deep learning algorithms can improve nodule malignancy risk stratification. However, they have typically been used to analyse single time-point CT data when detecting malignant nodules on either baseline or incident CT LCS rounds. Deep learning algorithms offer the greatest value in two respects. First, they have great potential for assessing nodule change across time-series CT scans, where subtle changes may be challenging to identify with the human eye alone. Second, they can be targeted at nodules developing on incident screening rounds, where cancers are generally smaller and more challenging to detect confidently.

Here, we show the performance of our Deep learning-based Computer-Aided Diagnosis model integrating Nodule and Lung imaging data with clinical Metadata Longitudinally (DeepCAD-NLM-L) for malignancy prediction. DeepCAD-NLM-L showed improved performance (AUC = 88%) compared with models utilizing single time-point data alone. DeepCAD-NLM-L also demonstrated comparable and complementary performance to radiologists when interpreting the most challenging nodules typically found in LCS programs, and similar performance to radiologists when assessed on an out-of-distribution imaging dataset. The results emphasize the advantages of using time-series and multimodal analyses when interpreting malignancy risk in LCS.
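
The abstract describes integrating nodule and lung imaging across time points with clinical metadata, but does not detail the architecture. Below is a toy PyTorch sketch of one plausible design (a shared per-scan 3D encoder, a GRU over time points, and late fusion with metadata); all layer sizes and the class name are illustrative assumptions, not the published DeepCAD-NLM-L model.

```python
import torch
import torch.nn as nn

class LongitudinalFusion(nn.Module):
    """Toy longitudinal multimodal model: a shared 3D encoder per CT time
    point, a GRU across time, and late fusion with clinical metadata."""
    def __init__(self, meta_dim=8, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(            # shared per-scan feature extractor
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim + meta_dim, 1)   # malignancy logit

    def forward(self, scans, metadata):
        # scans: (B, T, 1, D, H, W); metadata: (B, meta_dim)
        b, t = scans.shape[:2]
        feats = self.encoder(scans.flatten(0, 1)).view(b, t, -1)
        _, h = self.temporal(feats)              # final hidden state summarizes the series
        return self.head(torch.cat([h[-1], metadata], dim=1))

model = LongitudinalFusion()
logit = model(torch.randn(2, 3, 1, 16, 32, 32), torch.randn(2, 8))
print(torch.sigmoid(logit).shape)                # (2, 1) malignancy probabilities
```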

Citations: 0
Deep neural network for the prediction of KRAS, NRAS, and BRAF genotypes in left-sided colorectal cancer based on histopathologic images
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-12 | DOI: 10.1016/j.compmedimag.2024.102384
Xuejie Li, Xianda Chi, Pinjie Huang, Qiong Liang, Jianpei Liu

Background

The KRAS, NRAS, and BRAF genotypes are critical for selecting targeted therapies for patients with metastatic colorectal cancer (mCRC). Here, we aimed to develop a deep learning model that utilizes pathologic whole-slide images (WSIs) to accurately predict the status of KRAS, NRAS, and BRAFV600E.

Methods

129 patients with left-sided colon cancer and rectal cancer from the Third Affiliated Hospital of Sun Yat-sen University were assigned to the training and testing cohorts. Utilizing three convolutional neural networks (ResNet18, ResNet50, and Inception v3), we extracted 206 pathological features from H&E-stained WSIs, serving as the foundation for constructing specific pathological models. A clinical feature model was then developed, with carcinoembryonic antigen (CEA) identified through comprehensive multiple regression analysis as the key biomarker. Subsequently, these two models were combined to create a clinical-pathological integrated model, resulting in a total of three genetic prediction models.
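
To make the fusion step concrete, the sketch below shows one plausible way to combine CNN-derived pathology features with a CEA value in a simple classifier and score it with AUC, using synthetic stand-in data. The feature count and cohort sizes match the abstract (206 pathology features; 103 training and 26 testing patients), but the data, model choice (logistic regression), and preprocessing are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-ins: 206 CNN-derived pathology features per patient plus CEA.
n_train, n_test = 103, 26
X_path = rng.normal(size=(n_train + n_test, 206))
cea = rng.lognormal(mean=1.0, sigma=0.5, size=(n_train + n_test, 1))
y = rng.integers(0, 2, size=n_train + n_test)      # 1 = mutant, 0 = wild-type

X = np.hstack([X_path, np.log1p(cea)])             # clinical-pathological fusion
clf = LogisticRegression(max_iter=1000).fit(X[:n_train], y[:n_train])
probs = clf.predict_proba(X[-n_test:])[:, 1]
print("test AUC:", roc_auc_score(y[-n_test:], probs))  # ~0.5 on random labels
```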

Results

103 patients were evaluated in the training cohort (1,782,302 image tiles), while the remaining 26 patients were enrolled in the testing cohort (489,481 image tiles). Compared with the clinical model and the pathology model, the combined model, which incorporated CEA levels and pathological signatures, showed increased predictive ability, with an area under the curve (AUC) of 0.96 in the training cohort and 0.83 in the testing cohort, accompanied by a high positive predictive value (PPV 0.92).

Conclusion

The combined model demonstrated a considerable ability to accurately predict the status of KRAS, NRAS, and BRAFV600E in patients with left-sided colorectal cancer, with potential application in assisting doctors to develop targeted treatment strategies for mCRC patients, effectively identify mutations, and eliminate the need for confirmatory genetic testing.

Citations: 0
Unsupervised lung CT image registration via stochastic decomposition of deformation fields
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-07 | DOI: 10.1016/j.compmedimag.2024.102397
Jing Zou, Youyi Song, Lihao Liu, Angelica I. Aviles-Rivero, Jing Qin

We address the problem of lung CT image registration, which underpins various diagnoses and treatments for lung diseases. The main crux of the problem is the large deformation that the lungs undergo during respiration. This physiological process imposes several challenges from a learning point of view. In this paper, we propose a novel training scheme, called stochastic decomposition, which enables deep networks to effectively learn such a difficult deformation field during lung CT image registration. The key idea is to stochastically decompose the deformation field and supervise the registration with synthetic data that exhibit the corresponding appearance discrepancy. The stochastic decomposition allows for revealing all possible decompositions of the deformation field. At the learning level, these decompositions can be seen as a prior that reduces the ill-posedness of the registration, thereby boosting performance. We demonstrate the effectiveness of our framework on lung CT data and show, through extensive numerical and visual results, that our technique outperforms existing methods.
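
As a rough intuition for the training scheme, the toy sketch below splits a displacement field into two randomly weighted parts. Note that a true diffeomorphic decomposition composes warps rather than adding displacement fields; this linear split is only meant to illustrate the stochastic-sampling idea, not the paper's actual method.

```python
import numpy as np

def stochastic_split(displacement, rng):
    """Crude illustration: split a displacement field u into two parts
    u1 = a*u and u2 = (1-a)*u with a random a ~ U(0.2, 0.8).

    Real diffeomorphic decomposition composes warps rather than adding
    displacements; this linear split only illustrates the sampling idea."""
    a = rng.uniform(0.2, 0.8)
    return a * displacement, (1.0 - a) * displacement

rng = np.random.default_rng(7)
u = rng.normal(scale=2.0, size=(3, 32, 64, 64))   # (dz, dy, dx) over a 3D grid
u1, u2 = stochastic_split(u, rng)
assert np.allclose(u1 + u2, u)                    # the parts recompose exactly
print(u1.std(), u2.std())
```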

Citations: 0
Weakly-supervised preclinical tumor localization associated with survival prediction from lung cancer screening Chest X-ray images
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-07 | DOI: 10.1016/j.compmedimag.2024.102395
Renato Hermoza, Jacinto C. Nascimento, Gustavo Carneiro

In this paper, we hypothesize that it is possible to localize image regions of preclinical tumors in a Chest X-ray (CXR) image by weakly-supervised training of a survival prediction model using a dataset containing CXR images of healthy patients and their time-to-death labels. These visual explanations can empower clinicians in early lung cancer detection and increase patient awareness of their susceptibility to the disease. To test this hypothesis, we train a censor-aware multi-class survival prediction deep learning classifier that is robust to imbalanced training, where classes represent quantized numbers of days for time-to-death prediction. Such a multi-class model allows us to use post-hoc interpretability methods, such as Grad-CAM, to localize image regions of preclinical tumors. For the experiments, we propose a new benchmark based on the National Lung Screening Trial (NLST) dataset to test weakly-supervised preclinical tumor localization and survival prediction models, and the results suggest that our proposed method achieves state-of-the-art C-index survival prediction and weakly-supervised preclinical tumor localization results. To our knowledge, this constitutes a pioneering approach in the field that is able to produce visual explanations of preclinical events associated with survival prediction results.
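
The following sketch illustrates, under stated assumptions, how time-to-death can be quantized into classes and how censoring might be respected in the loss: censored patients only penalize probability mass on time bins they are already known to have survived past. This is one simple censor-aware variant, not necessarily the formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def quantize_survival(days, bin_edges):
    """Map time-to-death (days) to a class index given sorted bin edges."""
    return torch.bucketize(days, bin_edges)

def censor_aware_loss(logits, days, censored, bin_edges):
    """One simple censor-aware variant: censored patients only constrain the
    bins they are known to have survived past (a lower bound on time)."""
    target = quantize_survival(days, bin_edges)
    uncensored = ~censored
    loss = F.cross_entropy(logits[uncensored], target[uncensored]) if uncensored.any() else 0.0
    if censored.any():
        # penalize probability mass on bins the censored patient already outlived
        probs = F.softmax(logits[censored], dim=1)
        mask = torch.arange(logits.shape[1]) < target[censored].unsqueeze(1)
        loss = loss + (probs * mask).sum(dim=1).mean()
    return loss

bin_edges = torch.tensor([365.0, 730.0, 1460.0])   # 4 classes: <1y, 1-2y, 2-4y, >4y
logits = torch.randn(5, 4, requires_grad=True)
days = torch.tensor([100.0, 400.0, 900.0, 2000.0, 800.0])
censored = torch.tensor([False, False, True, False, True])
censor_aware_loss(logits, days, censored, bin_edges).backward()
```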

Citations: 0
GNN-based structural information to improve DNN-based basal ganglia segmentation in children following early brain lesion
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-07 | DOI: 10.1016/j.compmedimag.2024.102396
Patty Coupeau, Jean-Baptiste Fasquel, Lucie Hertz-Pannier, Mickaël Dinomais

Analyzing the basal ganglia following an early brain lesion is crucial due to their noteworthy role in sensory–motor functions. However, the segmentation of these subcortical structures on MRI is challenging in children and is further complicated by the presence of a lesion. Although current deep neural networks (DNN) perform well in segmenting subcortical brain structures in healthy brains, they lack robustness when faced with lesion variability, leading to structural inconsistencies. Given the established spatial organization of the basal ganglia, we propose enhancing the DNN-based segmentation through post-processing with a graph neural network (GNN). The GNN conducts node classification on graphs encoding both class probabilities and spatial information regarding the regions segmented by the DNN. In this study, we focus on neonatal arterial ischemic stroke (NAIS) in children. The approach is evaluated on both healthy children and children after NAIS using three DNN backbones: U-Net, UNETr, and MSGSE-Net. The results show an improvement in segmentation performance, with an increase in the median Dice score of up to 4% and a reduction in the median Hausdorff distance (HD) of up to 93% for healthy children (from 36.45 to 2.57) and up to 91% for children suffering from NAIS (from 40.64 to 3.50). The performance of the method is compared with atlas-based methods. Severe cases of neonatal stroke result in a decline in performance in the injured hemisphere, without negatively affecting the segmentation of the contralateral, uninjured hemisphere. Furthermore, the approach demonstrates resilience to small training datasets, a widespread challenge in the medical field, particularly in pediatrics and for rare pathologies.
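
To illustrate the post-processing idea, the sketch below runs a minimal two-layer graph convolution (Kipf-and-Welling-style) over a hypothetical region graph whose node features could combine DNN class probabilities with region centroids. It is a self-contained toy in plain PyTorch, not the authors' GNN.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Two-layer graph convolution for node classification. Node features
    could be, e.g., DNN class probabilities plus 3D region centroids."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.shape[0])             # add self-loops
        d = a.sum(1).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)  # D^-1/2 (A + I) D^-1/2
        h = torch.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)

# Hypothetical example: 8 candidate regions, features = 5 class probs + centroid
x = torch.rand(8, 8)
adj = (torch.rand(8, 8) > 0.7).float()
adj = ((adj + adj.T) > 0).float()                     # symmetric adjacency
logits = TinyGCN(8, 16, 5)(x, adj)
print(logits.argmax(1))                               # refined label per region
```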

Citations: 0
Leveraging a realistic synthetic database to learn Shape-from-Shading for estimating the colon depth in colonoscopy images
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-03 | DOI: 10.1016/j.compmedimag.2024.102390
Josué Ruano, Martín Gómez, Eduardo Romero, Antoine Manzanera

Colonoscopy is the procedure of choice to diagnose, screen for, and treat colon and rectum cancer, from early detection of small precancerous lesions (polyps) to confirmation of malign masses. However, the high variability of the organ appearance and the complex shape of both the colon wall and structures of interest make this exploration difficult. Learned visuospatial and perceptual abilities mitigate technical limitations in clinical practice by proper estimation of the intestinal depth. This work introduces a novel methodology to estimate colon depth maps in single frames from monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map, improving sharp depth estimations in haustral folds and polyps via a custom loss function that minimizes the estimation error in edges and curvatures. The network was trained on a custom synthetic colonoscopy database constructed and released herein, composed of 248,400 frames (47 videos) with depth annotations at the pixel level. This collection comprises 5 subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation with the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment with a real database showed consistent depth estimations, visually evaluated by the expert gastroenterologist coauthoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method using a public synthetic database, and comparable results on a set of images against five other state-of-the-art methods. Additionally, three-dimensional reconstructions demonstrated useful approximations of the gastrointestinal tract geometry. Code for reproducing the reported results and the dataset are available at https://github.com/Cimalab-unal/ColonDepthEstimation.
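
The custom loss is described as minimizing estimation error in edges and curvatures; the sketch below shows one plausible form, an L1 depth term up-weighted at depth discontinuities plus a gradient-matching term. The weighting scheme and constants are assumptions for illustration, not the published loss.

```python
import torch
import torch.nn.functional as F

def edge_aware_depth_loss(pred, target, edge_weight=2.0):
    """L1 depth error plus a gradient-matching term that emphasizes edges,
    one plausible form of an edge/curvature-sensitive depth loss."""
    l1 = torch.abs(pred - target)
    # finite-difference gradients of ground-truth and predicted depth
    gx = torch.abs(target[..., :, 1:] - target[..., :, :-1])
    gy = torch.abs(target[..., 1:, :] - target[..., :-1, :])
    pgx = torch.abs(pred[..., :, 1:] - pred[..., :, :-1])
    pgy = torch.abs(pred[..., 1:, :] - pred[..., :-1, :])
    grad_term = torch.abs(pgx - gx).mean() + torch.abs(pgy - gy).mean()
    # up-weight pixels with strong depth discontinuities (haustral folds, polyps)
    w = 1.0 + edge_weight * (F.pad(gx, (0, 1)) + F.pad(gy, (0, 0, 0, 1)))
    return (w * l1).mean() + grad_term

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
edge_aware_depth_loss(pred, target).backward()
```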

Citations: 0
3DFRINet: A Framework for the Detection and Diagnosis of Fracture Related Infection in Low Extremities Based on 18F-FDG PET/CT 3D Images
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-03 | DOI: 10.1016/j.compmedimag.2024.102394
Chengfan Li, Liangbing Nie, Zhenkui Sun, Xuehai Ding, Quanyong Luo, Chentian Shen

Fracture related infection (FRI) is one of the most devastating complications after fracture surgery in the lower extremities, which can lead to extremely high morbidity and medical costs. Therefore, early comprehensive evaluation and accurate diagnosis of patients are critical for appropriate treatment, prevention of complications, and good prognosis. 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) is one of the most commonly used medical imaging modalities for diagnosing FRI. With the development of deep learning, more neural networks have been proposed and have become powerful computer-aided diagnosis tools in medical imaging. Therefore, a fully automated two-stage framework for FRI detection and diagnosis, 3DFRINet (Three Dimension FRI Network), is proposed for 18F-FDG PET/CT 3D imaging. The first stage effectively extracts and fuses the features of both modalities to accurately locate the lesion, using a dual-branch design and an attention module. The second stage reduces the dimensionality of the image by using the maximum intensity projection, which retains the effective features while reducing the computational effort and achieving excellent diagnostic performance. The diagnostic performance for lesions reached 91.55% accuracy, 0.9331 AUC, and 0.9250 F1 score. 3DFRINet has an advantage over six nuclear medicine experts in each classification metric. The statistical analysis shows that 3DFRINet is equivalent or superior to the primary nuclear medicine physicians and comparable to the senior nuclear medicine physicians. In conclusion, this study is the first to propose a method based on 18F-FDG PET/CT three-dimensional imaging for FRI localization and diagnosis. This method shows a superior lesion detection rate and diagnostic efficiency and therefore has good prospects for clinical application.
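
The maximum intensity projection (MIP) used in the second stage is straightforward to reproduce; the sketch below collapses a 3D volume to 2D by keeping the brightest voxel along a chosen axis, which for PET preserves focal high-uptake lesions while greatly reducing the input size. The synthetic volume and the bright "lesion" are stand-ins for real PET data.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Collapse a 3D volume to 2D by keeping the brightest voxel along one
    axis; for PET this preserves focal high-uptake regions."""
    return volume.max(axis=axis)

# Hypothetical PET volume (D, H, W) with a bright synthetic "lesion"
vol = np.random.rand(128, 96, 96).astype(np.float32)
vol[60:64, 40:44, 40:44] += 5.0
mips = [maximum_intensity_projection(vol, axis=a) for a in (0, 1, 2)]
print([m.shape for m in mips])   # axial, coronal, sagittal projections
```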

Citations: 0
Towards a unified approach for unsupervised brain MRI Motion Artefact Detection with few shot Anomaly Detection
IF 5.7 | Region 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-03 | DOI: 10.1016/j.compmedimag.2024.102391
Niamh Belton, Misgina Tsighe Hagos, Aonghus Lawlor, Kathleen M. Curran

Automated Motion Artefact Detection (MAD) in Magnetic Resonance Imaging (MRI) is a field of study that aims to automatically flag motion artefacts in order to prevent the requirement for a repeat scan. In this paper, we identify and tackle the three current challenges in the field of automated MAD: (1) reliance on fully-supervised training, meaning specific examples of Motion Artefacts (MA) are required; (2) inconsistent use of benchmark datasets across different works, and the use of private datasets for testing and training of newly proposed MAD techniques; and (3) a lack of sufficiently large datasets for MRI MAD. To address these challenges, we demonstrate how MAs can be identified by formulating the problem as an unsupervised Anomaly Detection (AD) task. We compare the performance of three state-of-the-art AD algorithms (DeepSVDD, Interpolated Gaussian Descriptor, and FewSOME) on two open-source brain MRI datasets on the tasks of MAD and MA severity classification, with FewSOME achieving a MAD AUC >90% on both datasets and a Spearman rank correlation coefficient of 0.8 on the task of MA severity classification. These models are trained in the few-shot setting, meaning large brain MRI datasets are not required to build robust MAD algorithms. This work also sets a standard protocol for testing MAD algorithms on open-source benchmark datasets. In addition to addressing these challenges, we demonstrate how our proposed 'anomaly-aware' scoring function improves FewSOME's MAD performance in the setting where one or two shots of the anomalous class are available for training. Code available at https://github.com/niamhbelton/Unsupervised-Brain-MRI-Motion-Artefact-Detection/.
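
As background for the AD formulation, the sketch below shows a DeepSVDD-style anomaly score: the squared distance of a scan's embedding from the centre of the embeddings of motion-free training scans. The embeddings here are random stand-ins for encoder outputs; FewSOME's actual scoring, including the proposed 'anomaly-aware' function, differs.

```python
import numpy as np

def deep_svdd_scores(normal_emb, test_emb):
    """DeepSVDD-style anomaly score: squared distance to the centre of the
    embeddings of artefact-free training scans."""
    c = normal_emb.mean(axis=0)                       # hypersphere centre
    return ((test_emb - c) ** 2).sum(axis=1)

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(100, 32))             # embeddings of clean scans
clean_test = rng.normal(0, 1, size=(20, 32))
artefact_test = rng.normal(2.0, 1, size=(20, 32))     # shifted: simulated MA scans
scores = deep_svdd_scores(normal, np.vstack([clean_test, artefact_test]))
print(scores[:20].mean(), scores[20:].mean())         # artefacts score higher
```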

Citations: 0