
Journal of Digital Imaging: Latest Publications

Reliable Delineation of Clinical Target Volumes for Cervical Cancer Radiotherapy on CT/MR Dual-Modality Images
IF 4.4 | CAS Tier 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-01-10 | DOI: 10.1007/s10278-023-00951-5
Ying Sun, Yuening Wang, Kexin Gan, Yuxin Wang, Ying Chen, Yun Ge, Jie Yuan, Hanzi Xu

Accurate delineation of the clinical target volume (CTV) is a crucial prerequisite for safe and effective radiotherapy. This study addresses the integration of magnetic resonance (MR) images to aid target delineation on computed tomography (CT) images. However, obtaining MR images directly can be challenging. Therefore, we employ AI-based image generation techniques to intelligently generate MR images from CT images and thereby improve CT-based CTV delineation. To generate high-quality MR images, we propose an attention-guided single-loop image generation model. The model yields higher-quality images by introducing an attention mechanism in feature extraction and enhancing the loss function. Based on the generated MR images, we propose a CTV segmentation model that fuses multi-scale features through image fusion and a hollow space pyramid module to enhance segmentation accuracy. The image generation model used in this study improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) from 14.87 and 0.58 to 16.72 and 0.67, respectively, and reduces the feature distribution distance and learned-perceptual image similarity from 180.86 and 0.28 to 110.98 and 0.22, respectively, achieving higher-quality image generation. The proposed segmentation method demonstrates high accuracy: compared with the FCN method, the intersection-over-union (IoU) ratio and the Dice coefficient improve from 0.8360 and 0.8998 to 0.9043 and 0.9473, respectively. The Hausdorff distance and mean surface distance decrease from 5.5573 mm and 2.3269 mm to 4.7204 mm and 0.9397 mm, respectively, achieving clinically acceptable segmentation accuracy. Our method might reduce physicians’ manual workload and accelerate the diagnosis and treatment process while decreasing inter-observer variability in identifying anatomical structures.
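The overlap metrics quoted above (Dice coefficient and IoU) are standard functions of two binary masks. The sketch below is purely illustrative and is not the authors' code; the toy masks are invented for the example.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def iou(pred, gt):
    """Intersection over union (Jaccard index)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks: the prediction covers 3 of the 4 ground-truth pixels.
gt = np.zeros((4, 4), int)
gt[1:3, 1:3] = 1                     # 4 ground-truth pixels
pred = np.zeros((4, 4), int)
pred[1:3, 1:2] = 1
pred[1, 2] = 1                       # 3 predicted pixels, all inside gt
print(round(dice_coefficient(pred, gt), 4))  # 2*3/(3+4) -> 0.8571
print(round(iou(pred, gt), 4))               # 3/4 -> 0.75
```

Boundary metrics such as the Hausdorff and mean surface distance additionally require surface extraction and are usually taken from a library rather than written by hand.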

Citations: 0
CT-Based Intratumoral and Peritumoral Radiomics Nomograms for the Preoperative Prediction of Spread Through Air Spaces in Clinical Stage IA Non-small Cell Lung Cancer
IF 4.4 | CAS Tier 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-01-10 | DOI: 10.1007/s10278-023-00939-1
Yun Wang, Deng Lyu, Lei Hu, Junhong Wu, Shaofeng Duan, Taohu Zhou, Wenting Tu, Yi Xiao, Li Fan, Shiyuan Liu

The study aims to investigate the value of intratumoral and peritumoral radiomics and clinical-radiological features for predicting spread through air spaces (STAS) in patients with clinical stage IA non-small cell lung cancer (NSCLC). A total of 336 NSCLC patients from our hospital were randomly divided into the training cohort (n = 236) and the internal validation cohort (n = 100) at a ratio of 7:3, and 69 patients from two other external hospitals were collected as the external validation cohort. Univariate and multivariate analyses were used to select clinical-radiological features and construct a clinical model. The GTV, PTV5, PTV10, PTV15, PTV20, GPTV5, GPTV10, GPTV15, and GPTV20 models were constructed based on intratumoral and peritumoral (5 mm, 10 mm, 15 mm, 20 mm) radiomics features. Additionally, the radscore of the optimal radiomics model and the clinical-radiological predictors were used to construct a combined model and plot a nomogram. Lastly, the ROC curve and AUC value were used to evaluate the diagnostic performance of the models. Tumor density type (OR = 6.738) and distal ribbon sign (OR = 5.141) were independent risk factors for the occurrence of STAS. The GPTV10 model outperformed the other radiomics models, with AUC values of 0.887, 0.876, and 0.868 in the three cohorts. The AUC values of the combined model constructed from the GPTV10 radscore and clinical-radiological predictors were 0.901, 0.875, and 0.878. DeLong test results revealed that the combined model was superior to the clinical model in all three cohorts. The nomogram based on the GPTV10 radscore and clinical-radiological features exhibited high predictive efficiency for STAS status in NSCLC.
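The AUC values reported above summarize how well a score ranks positives above negatives. A minimal, library-free sketch (with hypothetical radscores, not study data) computes AUC via its Mann-Whitney interpretation:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case; ties count half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical radscores for 4 STAS-positive and 4 STAS-negative patients.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(auc_mann_whitney(scores, labels))  # 15/16 = 0.9375
```

The DeLong test mentioned in the abstract compares two such AUCs computed on the same patients, accounting for their correlation.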

Citations: 0
Deep Learning–based Diagnosis of Pulmonary Tuberculosis on Chest X-ray in the Emergency Department: A Retrospective Study
IF 4.4 | CAS Tier 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-01-10 | DOI: 10.1007/s10278-023-00952-4
Chih-Hung Wang, Weishan Chang, Meng-Rui Lee, Joyce Tay, Cheng-Yi Wu, Meng-Che Wu, Holger R. Roth, Dong Yang, Can Zhao, Weichung Wang, Chien-Hua Huang

Prompt and correct detection of pulmonary tuberculosis (PTB) is critical in preventing its spread. We aimed to develop a deep learning–based algorithm for detecting PTB on chest X-rays (CXRs) in the emergency department. This retrospective study included 3498 CXRs acquired from the National Taiwan University Hospital (NTUH). The images were chronologically split into a training dataset, NTUH-1519 (images acquired during the years 2015 to 2019; n = 2144), and a testing dataset, NTUH-20 (images acquired during the year 2020; n = 1354). Public databases, including the NIH ChestX-ray14 dataset (model training; 112,120 images), Montgomery County (model testing; 138 images), and Shenzhen (model testing; 662 images), were also used in model development. EfficientNetV2 was the basic architecture of the algorithm. Images from ChestX-ray14 were employed for pseudo-labelling to perform semi-supervised learning. The algorithm demonstrated excellent performance in detecting PTB (area under the receiver operating characteristic curve [AUC] 0.878, 95% confidence interval [CI] 0.854–0.900) in NTUH-20. The algorithm showed significantly better performance on posterior-anterior (PA) CXR (AUC 0.940, 95% CI 0.912–0.965, p-value < 0.001) compared with anterior–posterior (AUC 0.782, 95% CI 0.644–0.897) or portable anterior–posterior (AUC 0.869, 95% CI 0.814–0.918) CXR. The algorithm accurately detected cases of bacteriologically confirmed PTB (AUC 0.854, 95% CI 0.823–0.883). Finally, the algorithm tested favourably on Montgomery County (AUC 0.838, 95% CI 0.765–0.904) and Shenzhen (AUC 0.806, 95% CI 0.771–0.839). A deep learning–based algorithm could detect PTB on CXR with excellent performance, which may help shorten the interval between detection and airborne isolation for patients with PTB.
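Pseudo-labelling, the semi-supervised step mentioned above, commonly keeps only high-confidence teacher predictions on unlabeled data and converts them to hard labels for further training. The sketch below is a generic illustration with made-up softmax outputs, not the paper's pipeline:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep unlabeled samples whose top predicted class probability meets
    the confidence threshold; return their indices and hard labels."""
    probs = np.asarray(probs, float)
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# Hypothetical teacher softmax outputs for 4 unlabeled CXRs over {normal, PTB}.
probs = [[0.97, 0.03], [0.60, 0.40], [0.02, 0.98], [0.55, 0.45]]
idx, hard_labels = select_pseudo_labels(probs, threshold=0.95)
print(idx.tolist(), hard_labels.tolist())  # [0, 2] [0, 1]
```

Only the two confident cases survive; the ambiguous ones are left out rather than risk training on noisy labels.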

Citations: 0
Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images
IF 4.4 | CAS Tier 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-01-10 | DOI: 10.1007/s10278-023-00916-8
Sheikh Burhan ul haque, Aasim Zafar

In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability has been particularly vital during the COVID-19 pandemic, where rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have showcased remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks. These attacks, characterized by carefully crafted perturbations to input data, can potentially disrupt the models’ decision-making processes. In the medical context, such vulnerabilities could have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning and adversarial image filtering techniques. We use a modified adversarial learning algorithm to enhance the model’s resilience against adversarial examples during the training phase. During the inference phase, we apply JPEG compression to mitigate perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3. These models perform exceptionally well in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. We then assess the vulnerability of these models to three targeted adversarial attacks: fast gradient sign method (FGSM), projected gradient descent (PGD), and basic iterative method (BIM). The results show a significant drop in model performance after the attacks. However, our defense framework greatly improves the models’ resistance to adversarial attacks, maintaining high accuracy on adversarial examples. Importantly, our framework ensures the reliability of the models in diagnosing COVID-19 from clean images.
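FGSM, the first attack listed, takes a single step of size epsilon in the direction of the sign of the loss gradient with respect to the input. The toy sketch below uses a logistic model so the gradient has a closed form; it is illustrative only and has nothing to do with the paper's networks:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps):
    """FGSM: move each pixel by eps in the sign of the loss gradient,
    then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad_wrt_x), 0.0, 1.0)

# Toy differentiable "model": logistic regression p = sigmoid(w . x).
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])   # a 3-"pixel" input
y = 1.0                          # true label
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad = (p - y) * w               # d(binary cross-entropy)/dx for this model
x_adv = fgsm_perturb(x, grad, eps=0.1)
print(x_adv)                     # each pixel nudged by ±0.1 to raise the loss
```

PGD and BIM iterate this step with projection back into an epsilon-ball, which is why they are generally stronger than the single-step FGSM.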

Citations: 0
Automatic Urinary Stone Detection System for Abdominal Non-Enhanced CT Images Reduces the Burden on Radiologists
IF 4.4 | CAS Tier 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-01-10 | DOI: 10.1007/s10278-023-00946-2
Zhaoyu Xing, Zuhui Zhu, Zhenxing Jiang, Jingshi Zhao, Qin Chen, Wei Xing, Liang Pan, Yan Zeng, Aie Liu, Jiule Ding

To develop a fully automatic urinary stone detection system (kidney, ureter, and bladder) and to test it in a real clinical environment. The local institutional review board approved this retrospective single-center study, which used non-enhanced abdominopelvic CT scans from patients admitted to urology (uPatients) and emergency (ePatients) departments. The uPatients were randomly divided into training and validation sets at a ratio of 3:1. We designed cascade urinary stone map location-feature pyramid networks (USm-FPNs) and innovatively proposed a ureter distance heatmap method to estimate the ureter position on non-enhanced CT to further reduce false positives. The performance of the system was compared using the free-response receiver operating characteristic (FROC) curve and the precision-recall curve. This study included 811 uPatients and 356 ePatients. At the stone level, the cascade detector USm-FPNs achieved a mean of 1.88 false positives per scan (mFP) at a sensitivity of 0.977 in the validation set, and the mFP was further reduced to 1.18 at the same sensitivity of 0.977 after combining the ureter distance heatmap. At the patient level, the sensitivity and precision in the validation set were as high as 0.995 and 0.990, respectively. In a real clinical set of ePatients (27.5% of whom had stones), the mFP was 1.31 at a sensitivity as high as 0.977, and diagnostic time was reduced by > 20% with the system’s help. A fully automatic detection system for stones along the entire urinary tract on non-enhanced CT scans was proposed; it markedly reduces the burden on junior radiologists without compromising sensitivity on real emergency data.
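An FROC operating point, the unit in which the results above are reported, pairs lesion-level sensitivity with the mean number of false positives per scan (mFP) at a given detection-score threshold. A minimal sketch with invented candidate detections (not study data):

```python
import numpy as np

def froc_point(scores, is_tp, n_scans, n_lesions, threshold):
    """One FROC operating point: (sensitivity, mFP) at a score threshold.
    scores: confidence of each candidate detection across all scans;
    is_tp:  whether each candidate matches a true lesion."""
    keep = np.asarray(scores, float) >= threshold
    is_tp = np.asarray(is_tp, bool)
    tp = (is_tp & keep).sum()       # true lesions found
    fp = (~is_tp & keep).sum()      # spurious detections kept
    return tp / n_lesions, fp / n_scans

# 5 hypothetical candidate detections over 2 scans containing 3 true stones.
scores = [0.99, 0.90, 0.80, 0.60, 0.30]
is_tp  = [True, True, False, True, False]
sens, mfp = froc_point(scores, is_tp, n_scans=2, n_lesions=3, threshold=0.5)
print(sens, mfp)  # 1.0 0.5
```

Sweeping the threshold traces the full FROC curve; the paper's figures of 1.88 and 1.18 mFP at 0.977 sensitivity are two such operating points.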

Citations: 0
Robustness of Deep Networks for Mammography: Replication Across Public Datasets
IF 4.4 | CAS Tier 2 (Engineering & Technology) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-01-10 | DOI: 10.1007/s10278-023-00943-5

Abstract

Deep neural networks have demonstrated promising performance in screening mammography, with recent studies reporting performance at or above the level of trained radiologists on internal datasets. However, it remains unclear whether the performance of these trained models is robust and replicates across external datasets. In this study, we evaluate four state-of-the-art publicly available models using four publicly available mammography datasets (CBIS-DDSM, INbreast, CMMD, OMI-DB). Where test data were available, published results were replicated. The best-performing model, which achieved an area under the ROC curve (AUC) of 0.88 on internal data from NYU, achieved an AUC of 0.90 on the external CMMD dataset (N = 826 exams). On the larger OMI-DB dataset (N = 11,440 exams), it achieved an AUC of 0.84 but did not match the performance of individual radiologists (at a specificity of 0.92, the sensitivity was 0.97 for the radiologist and 0.53 for the network for a 1-year follow-up). The network showed higher performance for in situ cancers, as opposed to invasive cancers. Among invasive cancers, it was relatively weaker at identifying asymmetries and relatively stronger at identifying masses. The three other trained models that we evaluated all performed poorly on external datasets. Independent validation of trained models is an essential step to ensure safe and reliable use. Future progress in AI for mammography may depend on a concerted effort to make larger datasets publicly available that span multiple clinical sites.
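Sensitivity at a fixed specificity, as in the radiologist comparison above, is read off the ROC curve by thresholding at the corresponding quantile of the negative-class scores. A small sketch with synthetic scores (not study data):

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_spec):
    """Set the decision threshold at the target_spec quantile of the
    negative scores, then report (sensitivity, achieved specificity)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    neg = np.sort(scores[~labels])
    thr = np.quantile(neg, target_spec)   # linear interpolation by default
    sens = (scores[labels] > thr).mean()
    spec = (scores[~labels] <= thr).mean()
    return sens, spec

neg = [i / 10 for i in range(10)]   # 10 synthetic screen-negative exams
pos = [0.85, 0.95, 0.70, 0.99]      # 4 synthetic cancer exams
scores = neg + pos
labels = [0] * 10 + [1] * 4
sens, spec = sensitivity_at_specificity(scores, labels, target_spec=0.9)
print(sens, spec)  # 0.75 0.9
```

With finite data the achieved specificity only approximates the target, which is why papers usually state the operating specificity (here 0.92) alongside the sensitivity.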

引用次数: 0
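The radiologist comparison reported above hinges on reading sensitivity off the ROC curve at a fixed operating point (specificity 0.92). A minimal sketch of that computation, on entirely synthetic malignancy scores (none of the study's data), might look like this:

```python
import numpy as np

def auc_rank(labels, scores):
    """AUC via the Mann-Whitney rank-sum formulation (ties not averaged)."""
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(np.asarray(scores, dtype=float))
    ranks = np.empty(len(order))
    ranks[order] = np.arange(1, len(order) + 1)      # 1-based ranks
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sensitivity_at_specificity(labels, scores, specificity=0.92):
    """Sensitivity at the threshold where `specificity` of negatives test negative."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    thr = np.quantile(scores[~labels], specificity)  # operating threshold
    return float((scores[labels] > thr).mean())

# Synthetic malignancy scores: 4 benign (label 0) and 4 malignant (label 1) exams.
y = [0, 0, 0, 0, 1, 1, 1, 1]
s = [0.10, 0.20, 0.30, 0.40, 0.60, 0.70, 0.80, 0.90]
print(auc_rank(y, s), sensitivity_at_specificity(y, s))  # 1.0 1.0
```

On real data the two operating points reported in the abstract (radiologist vs. network) would come from evaluating `sensitivity_at_specificity` on each reader's or model's scores at the same fixed specificity.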
Machine Learning Supported the Modified Gustafson’s Criteria for Dental Age Estimation in Southwest China
IF 4.4 CAS Tier 2 (Engineering) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-01-10 DOI: 10.1007/s10278-023-00956-0
Xinhua Dai, Anjie Liu, Junhong Liu, Mengjun Zhan, Yuanyuan Liu, Wenchi Ke, Lei Shi, Xinyu Huang, Hu Chen, Zhenhua Deng, Fei Fan

Adult age estimation is one of the most challenging problems in forensic science and physical anthropology. In this study, we aimed to develop and evaluate machine learning (ML) methods based on the modified Gustafson’s criteria for dental age estimation. In this retrospective study, a total of 851 orthopantomograms were collected from patients aged 15 to 40 years old. The secondary dentin formation (SE), periodontal recession (PE), and attrition (AT) of four mandibular premolars were analyzed according to the modified Gustafson’s criteria. Ten ML models were generated and compared for age estimation. The partial least squares regressor outperformed other models in males with a mean absolute error (MAE) of 4.151 years. The support vector regressor (MAE = 3.806 years) showed good performance in females. The accuracy of ML models is better than the single-tooth model provided in the previous studies (MAE = 4.747 years in males and MAE = 4.957 years in females). The Shapley additive explanations method was used to reveal the importance of the 12 features in ML models and found that AT and PE are the most influential in age estimation. The findings suggest that the modified Gustafson method can be effectively employed for adult age estimation in the southwest Chinese population. Furthermore, this study highlights the potential of machine learning models to assist experts in achieving accurate and interpretable age estimation.

Citations: 0
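The evaluation protocol above (fit a regressor on the 12 modified Gustafson scores and report mean absolute error in years on a held-out set) can be sketched as follows. All data are synthetic, and an ordinary least-squares fit stands in for the paper's PLS and SVR models:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: the 12 ordinal scores (SE, PE, AT for four
# mandibular premolars), each graded 0-3, for 200 synthetic subjects.
X = rng.integers(0, 4, size=(200, 12)).astype(float)
true_w = rng.uniform(0.3, 1.2, size=12)              # assumed effect sizes
age = 15 + X @ true_w + rng.normal(0, 2, size=200)   # ages spanning roughly 15-40

# Hold out 50 subjects; ordinary least squares stands in for PLS/SVR here.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], age[:150], age[150:]
A = np.c_[X_tr, np.ones(len(X_tr))]                  # design matrix with intercept
w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
pred = np.c_[X_te, np.ones(len(X_te))] @ w

mae = np.mean(np.abs(pred - y_te))                   # mean absolute error, years
print(f"MAE = {mae:.2f} years")
```

The study's headline numbers (MAE of 4.151 years for males, 3.806 for females) are exactly this metric, computed on real orthopantomogram gradings rather than synthetic scores.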
TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images
IF 4.4 CAS Tier 2 (Engineering) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-01-10 DOI: 10.1007/s10278-023-00904-y
Ling Ma, Gen Li, Xingyu Feng, Qiliang Fan, Lizhi Liu

Lung cancer is the leading cause of cancer death. Since lung cancer appears as nodules at an early stage, detecting pulmonary nodules early can improve treatment efficiency and patient survival. The development of computer-aided analysis technology has made it possible to automatically detect lung nodules in computed tomography (CT) screening. In this paper, we propose a novel detection network, TiCNet, which embeds a transformer module in a 3D convolutional neural network (CNN) for pulmonary nodule detection on CT images. First, we integrate the transformer and CNN in an end-to-end structure to capture both short- and long-range dependencies, providing rich information on nodule characteristics. Second, we design an attention block and multi-scale skip pathways to improve the detection of small nodules. Finally, we develop a two-head detector to guarantee high sensitivity and specificity. Experimental results on the LUNA16 and PN9 datasets showed that TiCNet achieved superior performance compared with existing lung nodule detection methods, and the effectiveness of each module was demonstrated. The proposed TiCNet model is an effective tool for pulmonary nodule detection; validation revealed excellent performance, suggesting its potential to support lung cancer screening.

Citations: 0
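The core idea of embedding a transformer after a convolutional stage is that self-attention lets every spatial location of the feature map attend to every other, capturing the long-range dependency that convolutions alone miss. A single-head, NumPy-only sketch (random weights, purely illustrative, not TiCNet's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k, seed=0):
    """Single-head scaled dot-product attention with random projections."""
    n, d = tokens.shape
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.normal(0.0, d ** -0.5, (d, d_k)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n, n): each token attends to all
    return attn @ V, attn

# A toy CNN feature map (4x4 spatial grid, 8 channels) flattened into 16
# tokens -- the view a transformer module embedded after a conv stage sees.
fmap = np.random.default_rng(1).normal(size=(4, 4, 8))
tokens = fmap.reshape(-1, 8)
out, attn = self_attention(tokens, d_k=8)
print(out.shape, attn.shape)  # (16, 8) (16, 16)
```

Each row of `attn` is a probability distribution over all 16 positions, so a nodule-bearing token can draw context from anywhere in the field, regardless of distance.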
An MRI-Based Deep Transfer Learning Radiomics Nomogram to Predict Ki-67 Proliferation Index of Meningioma
IF 4.4 CAS Tier 2 (Engineering) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-01-10 DOI: 10.1007/s10278-023-00937-3
Chongfeng Duan, Dapeng Hao, Jiufa Cui, Gang Wang, Wenjian Xu, Nan Li, Xuejun Liu

The objective of this study was to predict the Ki-67 proliferation index of meningioma using a nomogram based on clinical, radiomics, and deep transfer learning (DTL) features. A total of 318 cases were enrolled. Clinical, radiomics, and DTL features were selected to construct models, and the radiomics and DTL scores were computed from the selected features and their correlation coefficients. The deep transfer learning radiomics (DTLR) nomogram was constructed from the selected clinical features, radiomics score, and DTL score. The area under the receiver operating characteristic curve (AUC) was calculated, and the models were compared by the DeLong test of AUCs and decision curve analysis (DCA). Sex, size, and peritumoral edema were selected to construct the clinical model; seven radiomics features and 15 DTL features were selected. The AUCs of the clinical, radiomics, and DTL models and the DTLR nomogram were 0.746, 0.750, 0.717, and 0.779, respectively. The DTLR nomogram had the highest AUC of 0.779 (95% CI 0.6643–0.8943), with an accuracy of 0.734, a sensitivity of 0.719, and a specificity of 0.750 in the test set. The DeLong test showed no significant difference in AUCs among the four models, but the DTLR nomogram had a larger net benefit than the other models across all threshold probabilities. The DTLR nomogram performed satisfactorily in Ki-67 prediction and could serve as a new evaluation method for meningioma that would be useful in clinical decision-making.

Citations: 0
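A nomogram of this kind is, underneath, a logistic model whose linear predictor combines the clinical features with the radiomics and DTL scores; each fitted coefficient then maps to a points scale on the printed chart. A sketch on synthetic stand-in scores (hypothetical data and coefficients, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Hypothetical stand-ins: three per-patient predictors (one clinical feature,
# a radiomics score, a DTL score) plus an intercept column.
X = np.c_[rng.normal(size=(n, 3)), np.ones(n)]
true_b = np.array([0.8, 1.2, 1.0, -0.3])             # assumed ground truth
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_b))).astype(float)

# Logistic regression by gradient descent: the fitted linear predictor is the
# nomogram score; its coefficients map to the points scales on the chart.
b = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ b))
    b -= 0.1 * X.T @ (p - y) / n

score = X @ b                                        # combined nomogram score
pos, neg = score[y == 1], score[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()           # discrimination of the score
print(f"training AUC = {auc:.2f}")
```

The pairwise-comparison AUC at the end is the same discrimination statistic the abstract reports per model (0.746, 0.750, 0.717, 0.779); on real data one would also compute it on a held-out test set.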
PET KinetiX—A Software Solution for PET Parametric Imaging at the Whole Field of View Level
IF 4.4 CAS Tier 2 (Engineering) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-01-10 DOI: 10.1007/s10278-023-00965-z
Florent L. Besson, Sylvain Faure

Kinetic modeling is the ultimate foundation of quantitative PET imaging and a unique opportunity to better characterize disease and support drug development. Primarily designed for research, parametric imaging based on PET kinetic modeling may become a reality in future clinical practice, enabled by the technical capabilities of the latest generation of commercially available PET systems. In the era of precision medicine, this paradigm shift should be promoted regardless of the PET system. To anticipate and stimulate this emerging clinical shift, we developed a vendor-independent software package, PET KinetiX, that allows faster and easier computation of parametric images from any 4D PET DICOM series at the whole-field-of-view level. PET KinetiX is currently a plug-in for the OsiriX DICOM viewer and provides a suite of five PET kinetic models: Patlak, Logan, 1-tissue compartment model, 2-tissue compartment model, and first-pass blood flow. After the 4D-PET DICOM series is loaded into OsiriX, image processing requires very few steps: choosing the kinetic model and defining an input function. After about 2 min of processing, the parametric and error maps of the chosen model are automatically estimated voxel-wise and written in DICOM format. The software benefits from the OsiriX graphical user interface, making it user-friendly. Compared with PMOD-PKIN (version 4.4) on twelve 18F-FDG PET dynamic datasets, PET KinetiX showed an absolute bias of 0.1% (0.05–0.25) for Ki(Patlak) and 5.8% (3.3–12.3) for Ki(2TCM). Several illustrative clinical research cases acquired on different hybrid PET systems (standard or extended axial fields of view, PET/CT, and PET/MRI) with different acquisition schemes (single-bed single-pass or multi-bed multi-pass) are also provided. PET KinetiX is a fast and efficient independent research software tool that helps molecular imaging users easily and quickly produce 3D PET parametric images from any reconstructed 4D-PET data acquired on standard or large PET systems.

Citations: 0
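Of the five kinetic models listed, Patlak is the simplest to illustrate: for an irreversibly trapped tracer, the tissue-to-plasma ratio becomes linear in "stretched time", and the net influx constant Ki is the slope of a straight-line fit. A synthetic sketch with a hypothetical input function and ground-truth Ki (not PET KinetiX code):

```python
import numpy as np

# Patlak graphical analysis: after equilibration,
#   C_t(t) / C_p(t) = Ki * (int_0^t C_p du) / C_p(t) + V0,
# so Ki is the slope and V0 the intercept of a linear fit in these coordinates.
t = np.linspace(0.5, 60.0, 120)                      # frame mid-times, minutes
Cp = 10.0 * np.exp(-0.1 * t) + 2.0                   # hypothetical plasma input
int_Cp = np.concatenate(([0.0],                      # cumulative trapezoid integral
                         np.cumsum((Cp[1:] + Cp[:-1]) / 2 * np.diff(t))))

Ki_true, V0 = 0.05, 0.30                             # synthetic ground truth
Ct = Ki_true * int_Cp + V0 * Cp                      # tissue curve from the model

x, y = int_Cp / Cp, Ct / Cp                          # Patlak coordinates
Ki_fit, V0_fit = np.polyfit(x[t > 15], y[t > 15], 1)  # fit late frames only
print(f"Ki = {Ki_fit:.4f} /min, V0 = {V0_fit:.3f}")   # recovers 0.0500, 0.300
```

In a parametric-imaging tool this per-curve fit runs voxel-wise over the whole field of view, producing a Ki map (and an error map from the fit residuals) rather than a single value.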