
Radiology-Artificial Intelligence: Latest Articles

Transformers in the Womb: Swin-UNETR Takes on Fetal Brain Imaging.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240677
Sanjay P Prabhu
Citations: 0
Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230229
Nicolò Pecco, Pasquale Anthony Della Rosa, Matteo Canini, Gianluca Nocera, Paola Scifo, Paolo Ivo Cavoretto, Massimo Candiani, Andrea Falini, Antonella Castellano, Cristina Baldoli

Purpose To test the performance of a transformer-based model when manipulating pretraining weights, dataset size, and input size and comparing the best model with the reference standard and state-of-the-art models for a resting-state functional MRI (rs-fMRI) fetal brain extraction task. Materials and Methods An internal retrospective dataset (172 fetuses, 519 images; collected 2018-2022) was used to investigate the influence of dataset size, pretraining approaches, and image input size on Swin-U-Net transformer (UNETR) and UNETR models. The internal and external (131 fetuses, 561 images) datasets were used to cross-validate and to assess the generalization capability of the best model versus state-of-the-art models on different scanner types and numbers of gestational weeks (GWs). The Dice similarity coefficient (DSC) and the balanced average Hausdorff distance (BAHD) were used as segmentation performance metrics. Generalized estimating equation multifactorial models were used to assess significant model and interaction effects of interest. Results The Swin-UNETR model was not affected by the pretraining approach and dataset size and performed best with the mean dataset image size, with a mean DSC of 0.92 and BAHD of 0.097. Swin-UNETR was not affected by scanner type. Generalization results showed that Swin-UNETR had lower performance compared with the reference standard models on the internal dataset and comparable performance on the external dataset. Cross-validation on internal and external test sets demonstrated better and comparable performance of Swin-UNETR versus convolutional neural network architectures during the late-fetal period (GWs > 25) but lower performance during the midfetal period (GWs ≤ 25). Conclusion Swin-UNETR showed flexibility in dealing with smaller datasets, regardless of pretraining approach. For fetal brain extraction from rs-fMR images, Swin-UNETR showed comparable performance with that of reference standard models during the late-fetal period and lower performance during the early GW period. Keywords: Transformers, CNN, Medical Imaging Segmentation, MRI, Dataset Size, Input Size, Transfer Learning Supplemental material is available for this article. © RSNA, 2024.
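The Dice similarity coefficient used as the primary segmentation metric in this abstract is simply the overlap between a predicted and a reference brain mask; below is a minimal NumPy sketch (illustrative only, not the authors' evaluation code).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Toy 3D masks: prediction covers 8 voxels, reference covers 12, overlap is 8.
pred = np.zeros((4, 4, 4), dtype=bool)
ref = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True
ref[1:4, 1:3, 1:3] = True
print(f"DSC = {dice_coefficient(pred, ref):.2f}")  # 0.80
```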
Citations: 0
WAW-TACE: A Hepatocellular Carcinoma Multiphase CT Dataset with Segmentations, Radiomics Features, and Clinical Data.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-10-23 DOI: 10.1148/ryai.240296
Krzysztof Bartnik, Tomasz Bartczak, Mateusz Krzyziński, Krzysztof Korzeniowski, Krzysztof Lamparski, Piotr Węgrzyn, Eric Lam, Mateusz Bartkowiak, Tadeusz Wróblewski, Katarzyna Mech, Magdalena Januszewicz, Przemysław Biecek

The WAW-TACE dataset contains baseline multiphase abdominal CT images from 233 treatment-naive patients with hepatocellular carcinoma treated with transarterial chemoembolization and includes 377 hand-crafted liver tumor masks, automated segmentations of multiple internal organs, extracted radiomics features, and corresponding extensive clinical data. The dataset can be accessed at https://zenodo.org/records/12741586 (DOI: 10.5281/zenodo.11063784).
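For readers who want to explore the release, the sketch below shows one generic way to load a CT volume and its tumor mask and derive a rough lesion volume with SimpleITK; the file names and NIfTI format are assumptions for illustration, so check the Zenodo record for the actual layout.

```python
import numpy as np
import SimpleITK as sitk

# Hypothetical file names; consult the Zenodo record for the real directory layout.
ct = sitk.ReadImage("waw_tace_case_001_arterial.nii.gz")
mask = sitk.ReadImage("waw_tace_case_001_tumor_mask.nii.gz")

ct_hu = sitk.GetArrayFromImage(ct)          # (z, y, x) array of Hounsfield units
tumor = sitk.GetArrayFromImage(mask) > 0    # binary tumor mask

voxel_ml = float(np.prod(ct.GetSpacing())) / 1000.0   # mm^3 per voxel -> mL
print(f"volume shape {ct_hu.shape}, tumor volume ≈ {tumor.sum() * voxel_ml:.1f} mL")
```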
Citations: 0
Assessing the Performance of Models from the 2022 RSNA Cervical Spine Fracture Detection Competition at a Level I Trauma Center.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-18 DOI: 10.1148/ryai.230550
Zixuan Hu, Markand Patel, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Mitra Naseri, Shobhit Mathur, Robert Moreland, Jefferson Wilson, Christopher Witiw, Kristen W Yeom, Qishen Ha, Darragh Hanley, Selim Seferbekov, Hao Chen, Philipp Singer, Christof Henkel, Pascal Pfeiffer, Ian Pan, Harshit Sheoran, Wuqi Li, Adam E Flanders, Felipe C Kitamura, Tyler Richards, Jason Talbott, Ervin Sejdić, Errol Colak

Purpose To evaluate the performance of the top models from the RSNA 2022 Cervical Spine Fracture Detection challenge on a clinical test dataset of both noncontrast and contrast-enhanced CT scans acquired at a level I trauma center. Materials and Methods Seven top-performing models in the RSNA 2022 Cervical Spine Fracture Detection challenge were retrospectively evaluated on a clinical test set of 1,828 CT scans (1,829 series: 130 positive for fracture, 1,699 negative for fracture; 1,308 noncontrast, 521 contrast-enhanced) from 1,779 patients (mean age, 55.8 ± 22.1 years; 1,154 male). Scans were acquired without exclusion criteria over one year (January to December 2022) from the emergency department of a neurosurgical and level I trauma center. Model performance was assessed using area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. False-positive and false-negative cases were further analyzed by a neuroradiologist. Results Although all seven models showed decreased performance on the clinical test set compared with the challenge dataset, they maintained high performance. On noncontrast CT scans, the models achieved a mean AUC of 0.89 (range: 0.81-0.92), sensitivity of 67.0% (range: 30.9%-80.0%), and specificity of 92.9% (range: 82.1%-99.0%). On contrast-enhanced CT scans, the models had a mean AUC of 0.88 (range: 0.76-0.94), sensitivity of 81.9% (range: 42.7%-100.0%), and specificity of 72.1% (range: 16.4%-92.8%). The models identified 10 fractures missed by radiologists. False-positives were more common on contrast-enhanced scans and were observed in patients with degenerative changes on noncontrast scans, while false-negatives were often associated with degenerative changes and osteopenia. Conclusion The winning models from the 2022 RSNA AI Challenge demonstrated high performance for cervical spine fracture detection on a clinical test dataset, warranting further evaluation of their use as clinical support tools. © RSNA, 2024.
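For reference, the headline metrics above (AUC, sensitivity, specificity) can be computed from per-scan fracture labels and model scores as in the following scikit-learn sketch on synthetic data (not the challenge evaluation code).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                      # 1 = fracture present
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 500), 0, 1)
y_pred = (y_score >= 0.5).astype(int)                      # example operating point

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC = {auc:.2f}, sensitivity = {tp / (tp + fn):.1%}, "
      f"specificity = {tn / (tn + fp):.1%}")
```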
Citations: 0
External Testing of a Deep Learning Model to Estimate Biologic Age Using Chest Radiographs.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230433
Jong Hyuk Lee, Dongheon Lee, Michael T Lu, Vineet K Raghu, Jin Mo Goo, Yunhee Choi, Seung Ho Choi, Hyungjin Kim

Purpose To assess the prognostic value of a deep learning-based chest radiographic age (hereafter, CXR-Age) model in a large external test cohort of Asian individuals. Materials and Methods This single-center, retrospective study included chest radiographs from consecutive, asymptomatic Asian individuals aged 50-80 years who underwent health checkups between January 2004 and June 2018. This study performed a dedicated external test of a previously developed CXR-Age model, which predicts an age adjusted based on the risk of all-cause mortality. Adjusted hazard ratios (HRs) of CXR-Age for all-cause, cardiovascular, lung cancer, and respiratory disease mortality were assessed using multivariable Cox or Fine-Gray models, and their added values were evaluated by likelihood ratio tests. Results A total of 36 924 individuals (mean chronological age, 58 years ± 7 [SD]; CXR-Age, 60 years ± 5; 22 352 male) were included. During a median follow-up of 11.0 years, 1250 individuals (3.4%) died, including 153 cardiovascular (0.4%), 166 lung cancer (0.4%), and 98 respiratory (0.3%) deaths. CXR-Age was a significant risk factor for all-cause (adjusted HR at chronological age of 50 years, 1.03; at 60 years, 1.05; at 70 years, 1.07), cardiovascular (adjusted HR, 1.11), lung cancer (adjusted HR for individuals who formerly smoked, 1.12; for those who currently smoke, 1.05), and respiratory disease (adjusted HR, 1.12) mortality (P < .05 for all). The likelihood ratio test demonstrated added prognostic value of CXR-Age to clinical factors, including chronological age for all outcomes (P < .001 for all). Conclusion Deep learning-based chest radiographic age was associated with various survival outcomes and had added value to clinical factors in asymptomatic Asian individuals, suggesting its generalizability. Keywords: Conventional Radiography, Thorax, Heart, Lung, Mediastinum, Outcomes Analysis, Quantification, Prognosis, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Adams and Bressem in this issue.
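The adjusted hazard ratios reported above come from multivariable Cox models; a minimal sketch of that pattern with the lifelines package on synthetic data is shown below (column names are illustrative, and the study's full covariate set and Fine-Gray competing-risk models are not reproduced).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "cxr_age": rng.normal(60, 5, n),             # deep learning-predicted age
    "chronological_age": rng.normal(58, 7, n),   # adjustment covariate
    "follow_up_years": rng.exponential(11, n),   # time to event or censoring
    "died": rng.integers(0, 2, n),               # all-cause mortality indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="died")
hr_per_year = float(np.exp(cph.params_["cxr_age"]))  # adjusted HR per 1-year increase
print(f"adjusted HR for CXR-Age: {hr_per_year:.2f}")
```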
Citations: 0
Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230521
Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou

Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods This retrospective study included data from 5150 patients (14 191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using b values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1692 test cases (2393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with PI-RADS score of 3 or greater and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (eg, with an extremely high b value). Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value Supplemental material is available for this article. © RSNA, 2024.
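Statistical analysis here was performed via bootstrapping; a generic percentile-bootstrap confidence interval for an AUC looks roughly like the sketch below (an illustrative resampling scheme, not the authors' exact analysis).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap CI for the AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if np.unique(y_true[idx]).size < 2:      # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 300)                 # 1 = clinically significant lesion
scores = 0.5 * labels + rng.normal(0, 0.4, 300)  # toy model outputs
print(bootstrap_auc(labels, scores))
```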
Citations: 0
nnU-Net-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230115
Rohan Bareja, Marwa Ismail, Douglas Martin, Ameya Nayate, Ipsa Yadav, Murad Labbad, Prateek Dullur, Sanya Garg, Benita Tamrazi, Ralph Salloum, Ashley Margol, Alexander Judkins, Sukanya Iyer, Peter de Blank, Pallavi Tiwari

Purpose To evaluate nnU-Net-based segmentation models for automated delineation of medulloblastoma tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female), with ages ranging from 2 to 18 years, with medulloblastomas, from three different sites (28 from hospital A, 18 from hospital B, and 32 from hospital C), who had data available from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery). The scans were retrospectively collected from the year 2000 until May 2019. Reference standard annotations of the tumor habitat, including enhancing tumor, edema, and cystic core plus nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. The two models were trained as follows: (a) the transfer learning nnU-Net model was pretrained on an adult glioma cohort (n = 484) and fine-tuned on medulloblastoma studies using Models Genesis and (b) the direct deep learning nnU-Net model was trained directly on the medulloblastoma datasets, across fivefold cross-validation. Model robustness was evaluated on the three datasets when using different combinations of training and test sets, with data from two sites at a time used for training and data from the third site used for testing. Results Analysis on the three test sites yielded Dice scores of 0.81, 0.86, and 0.86 and 0.80, 0.86, and 0.85 for tumor habitat; 0.68, 0.84, and 0.77 and 0.67, 0.83, and 0.76 for enhancing tumor; 0.56, 0.71, and 0.69 and 0.56, 0.71, and 0.70 for edema; and 0.32, 0.48, and 0.43 and 0.29, 0.44, and 0.41 for cystic core plus nonenhancing tumor for the transfer learning and direct nnU-Net models, respectively. The models were largely robust to site-specific variations. Conclusion nnU-Net segmentation models hold promise for accurate, robust automated delineation of medulloblastoma tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric medulloblastoma. Keywords: Pediatrics, MR Imaging, Segmentation, Transfer Learning, Medulloblastoma, nnU-Net, MRI Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Rudie and Correia de Verdier in this issue.
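The robustness analysis trains on two sites at a time and tests on the held-out third; the plain-Python sketch below captures that leave-one-site-out loop with placeholder training and evaluation functions (it is not the authors' nnU-Net pipeline).

```python
from itertools import combinations

sites = {"hospital_A": 28, "hospital_B": 18, "hospital_C": 32}   # patients per site

def train_model(train_sites):
    # Placeholder for nnU-Net training (with or without glioma pretraining).
    return f"model[{'+'.join(sorted(train_sites))}]"

def evaluate_dice(model, test_site):
    # Placeholder returning a tumor-habitat Dice score for the held-out site.
    return 0.85

for train_pair in combinations(sites, 2):
    test_site = (set(sites) - set(train_pair)).pop()
    model = train_model(train_pair)
    print(f"train on {train_pair}, test on {test_site}: "
          f"Dice = {evaluate_dice(model, test_site):.2f}")
```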
Citations: 0
Deep Learning Segmentation of Ascites on Abdominal CT Scans for Automatic Volume Quantification.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230601
Benjamin Hou, Sungwon Lee, Jung-Min Lee, Christopher Koh, Jing Xiao, Perry J Pickhardt, Ronald M Summers

Purpose To evaluate the performance of an automated deep learning method in detecting ascites and subsequently quantifying its volume in patients with liver cirrhosis and patients with ovarian cancer. Materials and Methods This retrospective study included contrast-enhanced and noncontrast abdominal-pelvic CT scans of patients with cirrhotic ascites and patients with ovarian cancer from two institutions, National Institutes of Health (NIH) and University of Wisconsin (UofW). The model, trained on The Cancer Genome Atlas Ovarian Cancer dataset (mean age [±SD], 60 years ± 11; 143 female), was tested on two internal datasets (NIH-LC and NIH-OV) and one external dataset (UofW-LC). Its performance was measured by the F1/Dice coefficient, SDs, and 95% CIs, focusing on ascites volume in the peritoneal cavity. Results On NIH-LC (25 patients; mean age, 59 years ± 14; 14 male) and NIH-OV (166 patients; mean age, 65 years ± 9; all female), the model achieved F1/Dice scores of 85.5% ± 6.1 (95% CI: 83.1, 87.8) and 82.6% ± 15.3 (95% CI: 76.4, 88.7), with median volume estimation errors of 19.6% (IQR, 13.2%-29.0%) and 5.3% (IQR, 2.4%-9.7%), respectively. On UofW-LC (124 patients; mean age, 46 years ± 12; 73 female), the model had an F1/Dice score of 83.0% ± 10.7 (95% CI: 79.8, 86.3) and a median volume estimation error of 9.7% (IQR, 4.5%-15.1%). The model showed strong agreement with expert assessments, with r² values of 0.79, 0.98, and 0.97 across the test sets. Conclusion The proposed deep learning method performed well in segmenting and quantifying the volume of ascites in patients with cirrhosis and those with ovarian cancer, in concordance with expert radiologist assessments. Keywords: Abdomen/GI, Cirrhosis, Deep Learning, Segmentation Supplemental material is available for this article. © RSNA, 2024 See also commentary by Aisen and Rodrigues in this issue.
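Turning a predicted ascites mask into a volume, and then into the percentage volume estimation error reported above, only requires the voxel spacing; a minimal NumPy sketch with assumed spacing values:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(0.8, 0.8, 3.0)) -> float:
    """Volume of a binary segmentation mask in milliliters."""
    return float(mask.sum()) * float(np.prod(spacing_mm)) / 1000.0

def volume_error_pct(pred_ml: float, ref_ml: float) -> float:
    """Absolute volume estimation error as a percentage of the reference volume."""
    return abs(pred_ml - ref_ml) / ref_ml * 100.0

pred_mask = np.zeros((120, 512, 512), dtype=bool)
pred_mask[40:60, 200:300, 200:300] = True     # toy ascites prediction (200 000 voxels)
pred_ml = mask_volume_ml(pred_mask)           # 384 mL with the assumed spacing
print(f"{pred_ml:.0f} mL predicted; error vs a 400 mL reference: "
      f"{volume_error_pct(pred_ml, 400.0):.1f}%")
```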
Citations: 0
Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240067
Andrew James Del Gaizo, Thomas F Osborne, Troy Shahoumian, Robert Sherrier

The diagnostic performance of an artificial intelligence (AI) clinical decision support solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice. The impact on radiologist read times and system efficiency was also quantified. A total of 61 704 consecutive noncontrast head CT examinations were retrospectively evaluated. System performance was calculated along with mean and median read times for CT studies obtained before (baseline, pre-AI period; August 2021 to May 2022) and after (post-AI period; January 2023 to February 2024) AI implementation. The AI solution had a sensitivity of 75.6%, specificity of 92.1%, accuracy of 91.7%, prevalence of 2.70%, and positive predictive value of 21.1%. Of the 56 745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret as compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49 007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI when ICH was not suspected by the AI solution (n = 52 281) (P < .001). CT scans with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took an average of 14 minutes 23 seconds (median, 13 minutes 35 seconds) to interpret as compared with 13 minutes 34 seconds (median, 12 minutes 30 seconds) for CT scans correctly reported as a bleed by the AI (n = 1192) (P = .04). With lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high volume, low prevalence environment. Keywords: Artificial Intelligence, Intracranial Hemorrhage, Read Time, Report Turnaround Time, System Efficiency Supplemental material is available for this article. © RSNA, 2024.
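The reported positive predictive value follows directly from sensitivity, specificity, and prevalence; the short calculation below reproduces it to within rounding.

```python
sensitivity = 0.756   # reported sensitivity
specificity = 0.921   # reported specificity
prevalence = 0.027    # reported ICH prevalence

flagged_with_bleed = sensitivity * prevalence                  # P(flagged and bleed)
flagged_without_bleed = (1 - specificity) * (1 - prevalence)   # P(flagged and no bleed)
ppv = flagged_with_bleed / (flagged_with_bleed + flagged_without_bleed)
print(f"PPV ≈ {ppv:.1%}")   # ≈ 21%, consistent with the reported 21.1%
```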
Citations: 0
Open Access Data and Deep Learning for Cardiac Device Identification on Standard DICOM and Smartphone-based Chest Radiographs.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230502
Felix Busch, Keno K Bressem, Phillip Suwalski, Lena Hoffmann, Stefan M Niehues, Denis Poddubnyy, Marcus R Makowski, Hugo J W L Aerts, Andrei Zhukov, Lisa C Adams

Purpose To develop and evaluate a publicly available deep learning model for segmenting and classifying cardiac implantable electronic devices (CIEDs) on Digital Imaging and Communications in Medicine (DICOM) and smartphone-based chest radiographs. Materials and Methods This institutional review board-approved retrospective study included patients with implantable pacemakers, cardioverter defibrillators, cardiac resynchronization therapy devices, and cardiac monitors who underwent chest radiography between January 2012 and January 2022. A U-Net model with a ResNet-50 backbone was created to classify CIEDs on DICOM and smartphone images. Using 2321 chest radiographs in 897 patients (median age, 76 years [range, 18-96 years]; 625 male, 272 female), CIEDs were categorized into four manufacturers, 27 models, and one "other" category. Five smartphones were used to acquire 11 072 images. Performance was reported using the Dice coefficient on the validation set for segmentation or balanced accuracy on the test set for manufacturer and model classification, respectively. Results The segmentation tool achieved a mean Dice coefficient of 0.936 (IQR: 0.890-0.958). The model had an accuracy of 94.36% (95% CI: 90.93%, 96.84%; 251 of 266) for CIED manufacturer classification and 84.21% (95% CI: 79.31%, 88.30%; 224 of 266) for CIED model classification. Conclusion The proposed deep learning model, trained on both traditional DICOM and smartphone images, showed high accuracy for segmentation and classification of CIEDs on chest radiographs. Keywords: Conventional Radiography, Segmentation Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Júdice de Mattos Farina and Celi in this issue.
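A U-Net with a ResNet-50 encoder can be instantiated in a few lines with the segmentation_models_pytorch library, and balanced accuracy comes from scikit-learn; the configuration below is a generic assumption for illustration, not the authors' released model or weights.

```python
import torch
import segmentation_models_pytorch as smp
from sklearn.metrics import balanced_accuracy_score

# Generic U-Net with a ResNet-50 encoder (untrained here; not the published weights).
model = smp.Unet(encoder_name="resnet50", encoder_weights=None,
                 in_channels=1, classes=1)

with torch.no_grad():
    radiograph = torch.randn(1, 1, 512, 512)   # toy grayscale chest radiograph
    mask_logits = model(radiograph)            # per-pixel CIED segmentation logits
print(mask_logits.shape)                       # torch.Size([1, 1, 512, 512])

# Balanced accuracy for manufacturer classification (toy labels).
y_true = ["maker_A", "maker_B", "maker_C", "maker_A"]
y_pred = ["maker_A", "maker_B", "maker_A", "maker_A"]
print(balanced_accuracy_score(y_true, y_pred))
```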
Citations: 0