
Journal of imaging informatics in medicine: Latest Publications

A Robust [18F]-PSMA-1007 Radiomics Ensemble Model for Prostate Cancer Risk Stratification.
Pub Date : 2024-09-30 DOI: 10.1007/s10278-024-01281-w
Giovanni Pasini, Alessandro Stefano, Cristina Mantarro, Selene Richiusa, Albert Comelli, Giorgio Ivan Russo, Maria Gabriella Sabini, Sebastiano Cosentino, Massimo Ippolito, Giorgio Russo

The aim of this study is to investigate the role of [18F]-PSMA-1007 PET in differentiating high- and low-risk prostate cancer (PCa) through a robust radiomics ensemble model. This retrospective study included 143 PCa patients who underwent [18F]-PSMA-1007 PET/CT imaging. PCa areas were manually contoured on PET images, and 1781 image biomarker standardization initiative (IBSI)-compliant radiomics features were extracted. A preliminary analysis pipeline, iterated 30 times and comprising the least absolute shrinkage and selection operator (LASSO) for feature selection and fivefold cross-validation for model optimization, was adopted to identify the features most robust to dataset variations, select candidate models for ensemble modelling, and optimize hyperparameters. Thirteen subsets of selected features were used to train the model ensemble: 11 generated from the preliminary analysis, plus two additional subsets, the first based on the combination of robust and fine-tuning features and the second on fine-tuning features only. Accuracy, area under the curve (AUC), sensitivity, specificity, precision, and f-score were calculated to summarize model performance. The Friedman test, followed by post hoc tests with Dunn-Sidak correction for multiple comparisons, was used to verify whether statistically significant differences existed among the ensemble models over the 30 iterations. The model ensemble trained with the combination of robust and fine-tuning features obtained the highest average accuracy (79.52%), AUC (85.75%), specificity (84.29%), precision (82.85%), and f-score (78.26%). Statistically significant differences (p < 0.05) were found for some performance metrics. These findings support the role of [18F]-PSMA-1007 PET radiomics in improving risk stratification for PCa by reducing dependence on biopsies.
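A minimal sketch of the iterated LASSO selection loop described above, assuming bootstrap resampling as the dataset perturbation and a simple stability threshold; variable names, the resampling scheme, and the threshold are illustrative, not the authors' exact configuration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def robust_feature_selection(X, y, n_iterations=30, keep_fraction=0.8, seed=0):
    """Repeat LASSO selection over bootstrap resamples with internal fivefold CV;
    keep features selected in at least keep_fraction of the iterations."""
    rng = np.random.RandomState(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_iterations):
        idx = rng.choice(len(y), size=len(y), replace=True)     # perturb the dataset
        Xs = StandardScaler().fit_transform(X[idx])
        lasso = LassoCV(cv=5, max_iter=10000).fit(Xs, y[idx])   # fivefold CV picks alpha
        counts += np.abs(lasso.coef_) > 1e-8                    # nonzero coef = selected
    return np.where(counts >= keep_fraction * n_iterations)[0]

# e.g., X: (143, 1781) radiomics feature matrix, y: high/low-risk labels in {0, 1}
```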

Citations: 0
Deep Learning Classification of Ischemic Stroke Territory on Diffusion-Weighted MRI: Added Value of Augmenting the Input with Image Transformations.
Pub Date : 2024-09-30 DOI: 10.1007/s10278-024-01277-6
Ilker Ozgur Koska, Alper Selver, Fazil Gelal, Muhsin Engin Uluc, Yusuf Kenan Çetinoğlu, Nursel Yurttutan, Mehmet Serindere, Oğuz Dicle

Our primary aim with this study was to build a patient-level classifier for stroke territory in DWI using AI to facilitate fast triage of stroke to a dedicated stroke center. DWI images of 271 and 122 consecutive acute ischemic stroke patients were retrospectively collected from two centers. Pretrained MobileNetV2 and EfficientNetB0 architectures were used to classify territorial subtypes as middle cerebral artery (MCA), posterior circulation, or watershed infarcts, along with normal slices. Various input combinations using edge maps, thresholding, and hard attention versions were explored. The effect of augmenting the three-channel inputs of pre-trained models on classification performance was analyzed. ROC analyses and confusion matrix-derived performance metrics of the models were reported. Of the 271 patients from center 1, 151 (55.7%) were male and 120 (44.3%) were female; 129 patients had MCA (47.6%), 65 had posterior circulation (24%), and 77 had watershed (28.0%) infarcts. Of the 122 patients from center 2, 78 (64%) were male and 44 (36%) were female; 52 patients (43%) had MCA, 51 (42%) had posterior circulation, and 19 (15%) had watershed infarcts. The Mobile-Crop model performed best, with 0.95 accuracy and a 0.91 mean f1 score for slice-wise classification, 0.88 accuracy on external test sets, and a 0.92 mean AUC. In conclusion, modified pre-trained models may be augmented with image transformations to provide more accurate classification of the affected stroke territory in DWI.
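One way to realize the augmented three-channel input described above is to replace the duplicated grayscale channels of a DWI slice with an edge map and a thresholded mask before a pretrained MobileNetV2; the Canny thresholds, the crude mean threshold, and the four-class head are illustrative assumptions.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models

def three_channel_input(dwi_slice: np.ndarray) -> torch.Tensor:
    """dwi_slice: 2-D float array scaled to [0, 1]."""
    img8 = (dwi_slice * 255).astype(np.uint8)
    edges = cv2.Canny(img8, 50, 150) / 255.0                    # edge-map channel
    mask = (dwi_slice > dwi_slice.mean()).astype(np.float32)    # crude threshold channel
    stacked = np.stack([dwi_slice, edges, mask], 0).astype(np.float32)
    return torch.from_numpy(stacked).unsqueeze(0)               # (1, 3, H, W)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = torch.nn.Linear(model.last_channel, 4)    # MCA/posterior/watershed/normal
model.eval()
logits = model(three_channel_input(np.random.rand(224, 224)))   # dummy slice for illustration
```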

Citations: 0
MR Image Fusion-Based Parotid Gland Tumor Detection.
Pub Date : 2024-09-26 DOI: 10.1007/s10278-024-01137-3
Kubilay Muhammed Sunnetci, Esat Kaba, Fatma Beyazal Celiker, Ahmet Alkan

The differentiation of benign and malignant parotid gland tumors is of major significance as it directly affects the treatment process. It is also a vital task for early and accurate diagnosis of parotid gland tumors and for treatment planning. As in other diseases, the differentiation of tumor types involves several challenging, time-consuming, and laborious processes. In this study, Magnetic Resonance (MR) images of 114 patients with parotid gland tumors are used for training and testing via Image Fusion (IF). After the Apparent Diffusion Coefficient (ADC), contrast-enhanced T1-w (T1C-w), and T2-w sequences are cropped, IF (ADC, T1C-w), IF (ADC, T2-w), IF (T1C-w, T2-w), and IF (ADC, T1C-w, T2-w) datasets are obtained for different combinations of these sequences using a two-dimensional Discrete Wavelet Transform (DWT)-based fusion technique. For each of these four datasets, ResNet18, GoogLeNet, and DenseNet-201 architectures are trained separately, yielding 12 models. A Graphical User Interface (GUI) application containing the most successful of these trained architectures for each dataset is also designed to support users. The GUI application not only fuses different sequence images but also predicts whether the label of the fused image is benign or malignant. The results show that the DenseNet-201 models for IF (ADC, T1C-w), IF (ADC, T2-w), and IF (ADC, T1C-w, T2-w) outperform the others, with accuracies of 95.45%, 95.96%, and 92.93%, respectively. The most successful model for IF (T1C-w, T2-w) is ResNet18, with an accuracy of 94.95%.
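The 2D DWT-based fusion step can be sketched with PyWavelets, assuming the common "average the approximation coefficients, keep the stronger detail coefficients" rule; the paper's exact wavelet and fusion rule are not specified here.

```python
import numpy as np
import pywt

def dwt_fuse(seq_a: np.ndarray, seq_b: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    """Fuse two co-registered MR slices of identical shape into one image."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(seq_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(seq_b, wavelet)
    cA = (cA1 + cA2) / 2.0                                       # average low-frequency content
    def pick(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)            # keep stronger detail
    return pywt.idwt2((cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))), wavelet)

# e.g., the IF(ADC, T1C-w) input, with dummy slices for illustration:
fused = dwt_fuse(np.random.rand(256, 256), np.random.rand(256, 256))
```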

Citations: 0
Identification of Bipolar Disorder and Schizophrenia Based on Brain CT and Deep Learning Methods.
Pub Date : 2024-09-26 DOI: 10.1007/s10278-024-01279-4
Meilin Li, Xingyu Hou, Wanying Yan, Dawei Wang, Ruize Yu, Xixiang Li, Fuyan Li, Jinming Chen, Lingzhen Wei, Jiahao Liu, Huaizhen Wang, Qingshi Zeng

With the increasing prevalence of mental illness, accurate clinical diagnosis is crucial. Compared with MRI, CT has the advantages of wide availability, low cost, short scanning time, and high patient cooperation. This study aims to construct a deep learning (DL) model based on CT images to identify bipolar disorder (BD) and schizophrenia (SZ). A total of 506 patients (BD = 227, SZ = 279) and 179 healthy controls (HC) were collected from January 2022 to May 2023 at two hospitals and divided into an internal training set and an internal validation set at a ratio of 4:1. An additional 65 patients (BD = 35, SZ = 30) and 40 HC were recruited from different hospitals and served as an external test set. All subjects underwent conventional brain CT examination. The DenseMD model, which identifies BD and SZ using multiple instance learning, was developed and compared with other classical DL models. The results showed that DenseMD performed excellently, with an accuracy of 0.745 in the internal validation set, whereas the accuracy of the ResNet-18, ResNeXt-50, and DenseNet-121 models was 0.672, 0.664, and 0.679, respectively. On the external test set, DenseMD again outperformed the other models with an accuracy of 0.724, versus 0.657, 0.638, and 0.676 for the ResNet-18, ResNeXt-50, and DenseNet-121 models, respectively. Therefore, the potential of DL models for identification of BD and SZ based on brain CT images was established, and the identification ability of the DenseMD model was better than that of other classical DL models.
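DenseMD's aggregation details are not given in this abstract; the sketch below shows a generic attention-based multiple-instance learning head over per-slice DenseNet-121 features, with the pooling choice and all names as illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MILClassifier(nn.Module):
    def __init__(self, n_classes=3):                 # BD / SZ / HC
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features            # slice-level feature extractor
        self.attn = nn.Sequential(nn.Linear(1024, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(1024, n_classes)

    def forward(self, slices):                       # slices: (n_slices, 3, H, W), one patient
        f = nn.functional.relu(self.features(slices))
        f = torch.flatten(nn.functional.adaptive_avg_pool2d(f, 1), 1)  # (n, 1024)
        w = torch.softmax(self.attn(f), dim=0)       # attention weights over the bag
        bag = (w * f).sum(dim=0, keepdim=True)       # patient-level embedding
        return self.head(bag)                        # (1, n_classes)

# CT slices replicated to 3 channels; dummy bag of 24 slices for illustration
logits = MILClassifier()(torch.randn(24, 3, 224, 224))
```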

Citations: 0
BSNEU-net: Block Feature Map Distortion and Switchable Normalization-Based Enhanced Union-net for Acute Leukemia Detection on Heterogeneous Dataset.
Pub Date : 2024-09-25 DOI: 10.1007/s10278-024-01252-1
Rabul Saikia, Roopam Deka, Anupam Sarma, Salam Shuleenda Devi

Acute leukemia is characterized by the swift proliferation of immature white blood cells (WBC) in the blood and bone marrow. It is categorized into acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), depending on whether the cell-line origin is lymphoid or myeloid, respectively. Deep learning (DL) and artificial intelligence (AI) are revolutionizing medical sciences by assisting clinicians with rapid illness identification, reducing workload, and enhancing diagnostic accuracy. This paper proposes a novel DL-based BSNEU-net framework to detect acute leukemia. It comprises 4 Union Blocks (UB) and incorporates block feature map distortion (BFMD) with switchable normalization (SN) in each UB. The UB employs union convolution to extract more discriminant features. The BFMD is adopted to acquire more generalized patterns and minimize overfitting, whereas SN layers are appended to improve the model's convergence and generalization capabilities. Uniform use of batch normalization across convolution layers is sensitive to changes in mini-batch dimension, which is effectively remedied by incorporating an SN layer. Here, a new dataset comprising 2400 blood smear images of ALL, AML, and healthy cases is proposed, as DL methodologies necessitate a sizeable and well-annotated dataset to combat overfitting. Further, a heterogeneous dataset comprising 2700 smear images is created by combining four publicly accessible benchmark datasets of ALL, AML, and healthy cases. The BSNEU-net model achieved excellent performance, with 99.37% accuracy on the novel dataset and 99.44% accuracy on the heterogeneous dataset. The comparative analysis demonstrates the superiority of the proposed methodology over competing schemes.
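BFMD is not specified in detail in this abstract; the sketch below implements a DropBlock-style layer that zeroes contiguous feature-map blocks during training, which is one plausible reading of "block feature map distortion". Block size and drop probability are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlockDistortion(nn.Module):
    """Zero out contiguous blocks of the feature map during training (DropBlock-style)."""
    def __init__(self, block_size=5, drop_prob=0.1):
        super().__init__()
        self.block_size, self.drop_prob = block_size, drop_prob

    def forward(self, x):                               # x: (N, C, H, W)
        if not self.training or self.drop_prob == 0.0:
            return x
        gamma = self.drop_prob / (self.block_size ** 2)
        seeds = (torch.rand_like(x) < gamma).float()    # sample block centres
        blocks = F.max_pool2d(seeds, self.block_size, stride=1,
                              padding=self.block_size // 2)
        mask = 1.0 - (blocks > 0).float()               # 0 inside dropped blocks
        return x * mask * mask.numel() / mask.sum().clamp(min=1.0)  # rescale kept units

out = BlockDistortion()(torch.randn(2, 64, 32, 32))     # dummy feature map
```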

Citations: 0
Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.
Pub Date : 2024-09-25 DOI: 10.1007/s10278-024-01267-8
Qiang Wang, Bingxi He, Jie Yu, Bowen Zhang, Jingchao Yang, Jin Liu, Xinwei Ma, Shijing Wei, Shuai Li, Hui Zheng, Zhenchao Tang

Ultrasound-guided quadratus lumborum block (QLB) technology has become a widely used perioperative analgesia method during abdominal and pelvic surgeries. Due to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience. Therefore, using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected data from 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique muscle, the internal oblique muscle, and the transversus abdominis muscle (collectively referred to as the EIT), as well as the bones. We then evaluated Q-VUM. Our model demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. For the QLM specifically, the IoU, recall, precision, and dice coefficient were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, the Q-VUM predictions showed that 85% of the pixels in the predicted block area fell within the actual blocked area. Finally, our model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, we proposed a model named Q-VUM that can accurately identify the anatomical structure of the quadratus lumborum in real time, aiding anesthesiologists in precisely locating the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
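The reported per-class Dice and IoU values follow the standard definitions, which can be computed as below; the integer label convention (e.g., 0 = background, 1 = QLM, 2 = EIT) is an assumption for illustration.

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, label: int):
    """Per-class Dice coefficient and IoU for integer-labeled segmentation masks."""
    p, g = (pred == label), (gt == label)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    dice = 2.0 * inter / (p.sum() + g.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou

# dummy masks for illustration; mIoU is the mean of per-class IoU values
pred = np.random.randint(0, 3, (256, 256))
gt = np.random.randint(0, 3, (256, 256))
print([dice_iou(pred, gt, c) for c in range(3)])
```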

Citations: 0
A Large Language Model to Detect Negated Expressions in Radiology Reports.
Pub Date : 2024-09-25 DOI: 10.1007/s10278-024-01274-9
Yvonne Su, Yonatan B Babore, Charles E Kahn

Natural language processing (NLP) is crucial to extract information accurately from unstructured text to provide insights for clinical decision-making, quality improvement, and medical research. This study compared the performance of a rule-based NLP system and a medical-domain transformer-based model to detect negated concepts in radiology reports. Using a corpus of 984 de-identified radiology reports from a large U.S.-based academic health system (1000 consecutive reports, excluding 16 duplicates), the investigators compared the rule-based medspaCy system and the Clinical Assertion and Negation Classification Bidirectional Encoder Representations from Transformers (CAN-BERT) system to detect negated expressions of terms from RadLex, the Unified Medical Language System Metathesaurus, and the Radiology Gamuts Ontology. Power analysis determined a sample size of 382 terms to achieve α = 0.05 and β = 0.8 for McNemar's test; based on an estimate of 15% negated terms, 2800 randomly selected terms were annotated manually as negated or not negated. Precision, recall, and F1 of the two models were compared using McNemar's test. Of the 2800 terms, 387 (13.8%) were negated. For negation detection, medspaCy attained a recall of 0.795, precision of 0.356, and F1 of 0.492. CAN-BERT achieved a recall of 0.785, precision of 0.768, and F1 of 0.777. Although recall was not significantly different, CAN-BERT had significantly better precision (χ2 = 304.64; p < 0.001). The transformer-based CAN-BERT model detected negated terms in radiology reports with high precision and recall; its precision significantly exceeded that of the rule-based medspaCy system. Use of this system will improve data extraction from textual reports to support information retrieval, AI model training, and discovery of causal relationships.
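For the rule-based arm, a minimal medspaCy negation check looks like the following; the target rule and sentence are illustrative, and `ent._.is_negated` is the attribute set by medspaCy's default ConText component.

```python
import medspacy
from medspacy.ner import TargetRule

# medspacy.load() builds a spaCy pipeline with a target matcher and the
# ConText component that marks negation, uncertainty, etc.
nlp = medspacy.load()
nlp.get_pipe("medspacy_target_matcher").add([TargetRule("pneumothorax", "FINDING")])

doc = nlp("No evidence of pneumothorax.")
for ent in doc.ents:
    print(ent.text, "negated:", ent._.is_negated)  # -> pneumothorax negated: True
```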

Citations: 0
Cross-site Validation of AI Segmentation and Harmonization in Breast MRI.
Pub Date : 2024-09-25 DOI: 10.1007/s10278-024-01266-9
Yu Huang, Nicholas J Leotta, Lukas Hirsch, Roberto Lo Gullo, Mary Hughes, Jeffrey Reiner, Nicole B Saphier, Kelly S Myers, Babita Panigrahi, Emily Ambinder, Philip Di Carlo, Lars J Grimm, Dorothy Lowell, Sora Yoon, Sujata V Ghate, Lucas C Parra, Elizabeth J Sutton

This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance with that of radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, as well as on common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and the radiologists on the test data from Sites 1 and 2 or on the common public data (median Dice score: Site 1, network 0.86 vs. radiologists 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common data, 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and the radiologists (0.88 vs. 0.89, n = 42). Radiologist performance varied on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
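A minimal sketch of the affine input-layer idea: a learnable per-channel scale and shift, initialized to identity, is prepended to the frozen Site-1 network and fine-tuned on the new site's labels. The layer design here is an assumption for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AffineInput(nn.Module):
    """Per-channel y = a*x + b on 3D volumes, initialized to the identity map."""
    def __init__(self, channels=1):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))

    def forward(self, x):                        # x: (N, C, D, H, W)
        return self.scale * x + self.shift

def adapt_to_new_site(frozen_unet: nn.Module) -> nn.Module:
    for p in frozen_unet.parameters():
        p.requires_grad = False                  # keep the Site-1 weights fixed
    return nn.Sequential(AffineInput(), frozen_unet)  # only the affine params train
```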

Citations: 0
A Robust Deep Learning Method with Uncertainty Estimation for the Pathological Classification of Renal Cell Carcinoma Based on CT Images.
Pub Date : 2024-09-23 DOI: 10.1007/s10278-024-01276-7
Ni Yao, Hang Hu, Kaicong Chen, Huan Huang, Chen Zhao, Yuan Guo, Boya Li, Jiaofen Nan, Yanting Li, Chuang Han, Fubao Zhu, Weihua Zhou, Li Tian

This study developed and validated a deep learning-based diagnostic model with uncertainty estimation to aid radiologists in the preoperative differentiation of pathological subtypes of renal cell carcinoma (RCC) based on computed tomography (CT) images. Data from 668 consecutive patients with pathologically confirmed RCC were retrospectively collected from Center 1, and the model was trained using fivefold cross-validation to classify RCC subtypes into clear cell RCC (ccRCC), papillary RCC (pRCC), and chromophobe RCC (chRCC). An external validation with 78 patients from Center 2 was conducted to evaluate the performance of the model. In the fivefold cross-validation, the area under the receiver operating characteristic curve (AUC) for the classification of ccRCC, pRCC, and chRCC was 0.868 (95% CI, 0.826-0.923), 0.846 (95% CI, 0.812-0.886), and 0.839 (95% CI, 0.802-0.88), respectively. In the external validation set, the AUCs were 0.856 (95% CI, 0.838-0.882), 0.787 (95% CI, 0.757-0.818), and 0.793 (95% CI, 0.758-0.831) for ccRCC, pRCC, and chRCC, respectively. The model demonstrated robust performance in predicting the pathological subtypes of RCC, while the incorporated uncertainty emphasized the importance of understanding model confidence. The proposed approach, integrated with uncertainty estimation, offers clinicians a dual advantage: accurate RCC subtype predictions complemented by diagnostic confidence metrics, thereby promoting informed decision-making for patients with RCC.
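The abstract does not detail the uncertainty mechanism; one common choice consistent with "a deep learning model with uncertainty estimation" is Monte-Carlo dropout with predictive entropy, sketched below purely as an illustration.

```python
import torch
import torch.nn.functional as F

def enable_mc_dropout(model):
    """Keep the network in eval mode but re-activate dropout layers only."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=20):
    enable_mc_dropout(model)
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                                    # (N, 3): ccRCC/pRCC/chRCC
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)   # per-case uncertainty
    return mean, entropy
```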

Citations: 0
Uncertainty Estimation for Dual View X-ray Mammographic Image Registration Using Deep Ensembles.
Pub Date : 2024-09-23 DOI: 10.1007/s10278-024-01244-1
William C Walton, Seung-Jun Kim

Techniques are developed for generating uncertainty estimates for convolutional neural network (CNN)-based methods that register the locations of lesions between the craniocaudal (CC) and mediolateral oblique (MLO) mammographic X-ray image views. Multi-view lesion correspondence is an important task that clinicians perform to characterize lesions during routine mammographic exams. Automated registration tools can aid in this task, yet if the tools also provide confidence estimates, they can be of greater value to clinicians, especially in cases involving dense tissue where lesions may be difficult to see. A set of deep ensemble-based techniques, which leverage a negative log-likelihood (NLL)-based cost function, is implemented for estimating uncertainties. The ensemble architectures involve significant modifications to an existing CNN dual-view lesion registration algorithm. Three architectural designs are evaluated, and different ensemble sizes are compared using various performance metrics. The techniques are tested on synthetic X-ray data, real 2D X-ray data, and slices from real 3D X-ray data. The ensembles generate covariance-based uncertainty ellipses that are correlated with registration accuracy, such that the ellipse sizes can give a clinician an indication of confidence in the mapping between the CC and MLO views. The results also show that the ellipse sizes can aid in improving computer-aided detection (CAD) results by matching CC/MLO lesion detections and reducing false alarms from both views, adding to clinical utility. The uncertainty estimation techniques show promise as a means of aiding clinicians in confidently establishing multi-view lesion correspondence, thereby improving diagnostic capability.
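A minimal sketch of an NLL cost for a network head that predicts a 2D lesion location together with a full covariance (the uncertainty ellipse). The Cholesky parameterization below is an assumed design for illustration, not necessarily the paper's output head.

```python
import torch
import torch.nn.functional as F

def gaussian_nll_2d(mean, chol, target):
    """mean, target: (N, 2) predicted/true locations; chol: (N, 3) raw (l11, l21, l22)
    parameterizing the lower-triangular Cholesky factor L with Sigma = L @ L.T."""
    l11 = F.softplus(chol[:, 0]) + 1e-6     # softplus keeps the diagonal positive,
    l22 = F.softplus(chol[:, 2]) + 1e-6     # so Sigma stays positive-definite
    l21 = chol[:, 1]
    d = target - mean
    # Solve L z = d by forward substitution; then d^T Sigma^-1 d = |z|^2
    z1 = d[:, 0] / l11
    z2 = (d[:, 1] - l21 * z1) / l22
    # NLL (up to a constant) = 0.5 |z|^2 + log det L; det L = l11 * l22
    return (0.5 * (z1**2 + z2**2) + torch.log(l11) + torch.log(l22)).mean()

# the eigenvectors/eigenvalues of Sigma = L L^T give the ellipse axes drawn for clinicians
loss = gaussian_nll_2d(torch.zeros(4, 2), torch.randn(4, 3), torch.randn(4, 2))
```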

Citations: 0