Latest Articles in Radiology-Artificial Intelligence
Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240067
Andrew James Del Gaizo, Thomas F Osborne, Troy Shahoumian, Robert Sherrier

The diagnostic performance of an artificial intelligence (AI) clinical decision support solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice. The impact on radiologist read times and system efficiency was also quantified. A total of 61 704 consecutive noncontrast head CT examinations were retrospectively evaluated. System performance was calculated along with mean and median read times for CT studies obtained before (baseline, pre-AI period; August 2021 to May 2022) and after (post-AI period; January 2023 to February 2024) AI implementation. The AI solution had a sensitivity of 75.6%, specificity of 92.1%, accuracy of 91.7%, prevalence of 2.70%, and positive predictive value of 21.1%. Of the 56 745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret as compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49 007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI when ICH was not suspected by the AI solution (n = 52 281) (P < .001). CT scans with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took an average of 14 minutes 23 seconds (median, 13 minutes 35 seconds) to interpret as compared with 13 minutes 34 seconds (median, 12 minutes 30 seconds) for CT scans correctly reported as a bleed by the AI (n = 1192) (P = .04). With lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high volume, low prevalence environment. Keywords: Artificial Intelligence, Intracranial Hemorrhage, Read Time, Report Turnaround Time, System Efficiency Supplemental material is available for this article. © RSNA, 2024.
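The reported positive predictive value follows from the sensitivity, specificity, and prevalence via Bayes' rule; a minimal sketch cross-checking the abstract's figures (the formula is the standard definition, not code from the study):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(disease | positive flag) = expected TP rate / (TP rate + FP rate)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Values reported for the ICH detection tool.
print(round(ppv(0.756, 0.921, 0.027), 3))  # ~0.21, matching the reported 21.1%
```

The low PPV despite 92.1% specificity illustrates the base-rate effect: at 2.70% prevalence, even a small false-positive rate generates far more false flags than true ones, which is what drives the extra read time on falsely flagged examinations.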

Citations: 0
Open Access Data and Deep Learning for Cardiac Device Identification on Standard DICOM and Smartphone-based Chest Radiographs.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230502
Felix Busch, Keno K Bressem, Phillip Suwalski, Lena Hoffmann, Stefan M Niehues, Denis Poddubnyy, Marcus R Makowski, Hugo J W L Aerts, Andrei Zhukov, Lisa C Adams

Purpose To develop and evaluate a publicly available deep learning model for segmenting and classifying cardiac implantable electronic devices (CIEDs) on Digital Imaging and Communications in Medicine (DICOM) and smartphone-based chest radiographs. Materials and Methods This institutional review board-approved retrospective study included patients with implantable pacemakers, cardioverter defibrillators, cardiac resynchronization therapy devices, and cardiac monitors who underwent chest radiography between January 2012 and January 2022. A U-Net model with a ResNet-50 backbone was created to classify CIEDs on DICOM and smartphone images. Using 2321 chest radiographs in 897 patients (median age, 76 years [range, 18-96 years]; 625 male, 272 female), CIEDs were categorized into four manufacturers, 27 models, and one "other" category. Five smartphones were used to acquire 11 072 images. Performance was reported using the Dice coefficient on the validation set for segmentation or balanced accuracy on the test set for manufacturer and model classification, respectively. Results The segmentation tool achieved a mean Dice coefficient of 0.936 (IQR: 0.890-0.958). The model had an accuracy of 94.36% (95% CI: 90.93%, 96.84%; 251 of 266) for CIED manufacturer classification and 84.21% (95% CI: 79.31%, 88.30%; 224 of 266) for CIED model classification. Conclusion The proposed deep learning model, trained on both traditional DICOM and smartphone images, showed high accuracy for segmentation and classification of CIEDs on chest radiographs. Keywords: Conventional Radiography, Segmentation Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Júdice de Mattos Farina and Celi in this issue.
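The Dice coefficient used to score the segmentation tool is twice the overlap divided by the combined size of the predicted and ground-truth masks; a minimal sketch on toy masks (the pixel sets are illustrative, not study data):

```python
def dice(pred: set, truth: set) -> float:
    """Dice coefficient: 2*|A & B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not pred and not truth:
        return 1.0  # both masks empty counts as full agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy masks as sets of (row, col) pixel coordinates.
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (0, 1), (1, 1)}
print(dice(pred, truth))  # 2*2/(3+3) = 0.666...
```

A mean Dice of 0.936, as reported, indicates the predicted CIED outlines agree with manual annotations on the large majority of pixels.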

Citations: 0
Chest Radiographs as Biological Clocks: Implications for Risk Stratification and Personalized Care.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240410
Lisa C Adams, Keno K Bressem
Citations: 0
Unveiling Disease Progression in Chest Radiographs through AI.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240426
Natália Alves, Kiran Vaidhya Venkadesh
Citations: 0
Smartphone Imaging and AI: A Commentary on Cardiac Device Classification.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240418
Eduardo Moreno Júdice de Mattos Farina, Leo Anthony Celi
Citations: 0
Anatomy-specific Progression Classification in Chest Radiographs via Weakly Supervised Learning.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230277
Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Clare B Poynton, Kayhan Batmanghelich

Purpose To develop a machine learning approach for classifying disease progression in chest radiographs using weak labels automatically derived from radiology reports. Materials and Methods In this retrospective study, a twin neural network was developed to classify anatomy-specific disease progression into four categories: improved, unchanged, worsened, and new. A two-step weakly supervised learning approach was employed, pretraining the model on 243 008 frontal chest radiographs from 63 877 patients (mean age, 51.7 years ± 17.0 [SD]; 34 813 [55%] female) included in the MIMIC-CXR database and fine-tuning it on the subset with progression labels derived from consecutive studies. Model performance was evaluated for six pathologic observations on test datasets of unseen patients from the MIMIC-CXR database. Area under the receiver operating characteristic (AUC) analysis was used to evaluate classification performance. The algorithm is also capable of generating bounding-box predictions to localize areas of new progression. Recall, precision, and mean average precision were used to evaluate the new progression localization. One-tailed paired t tests were used to assess statistical significance. Results The model outperformed most baselines in progression classification, achieving macro AUC scores of 0.72 ± 0.004 for atelectasis, 0.75 ± 0.007 for consolidation, 0.76 ± 0.017 for edema, 0.81 ± 0.006 for effusion, 0.7 ± 0.032 for pneumonia, and 0.69 ± 0.01 for pneumothorax. For new observation localization, the model achieved mean average precision scores of 0.25 ± 0.03 for atelectasis, 0.34 ± 0.03 for consolidation, 0.33 ± 0.03 for edema, and 0.31 ± 0.03 for pneumothorax. Conclusion Disease progression classification models were developed on a large chest radiograph dataset, which can be used to monitor interval changes and detect new pathologic conditions on chest radiographs. 
Keywords: Prognosis, Unsupervised Learning, Transfer Learning, Convolutional Neural Network (CNN), Emergency Radiology, Named Entity Recognition Supplemental material is available for this article. © RSNA, 2024 See also commentary by Alves and Venkadesh in this issue.
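The macro AUC scores above average a per-class AUC, and AUC itself equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch of that rank-based formulation, on toy scores (illustrative only, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of (positive,
    negative) pairs where the positive case outranks the negative
    (ties count as half a win)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model scores: "worsened" studies vs "not worsened" studies.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.889
```

On this view, the reported macro AUCs of 0.69-0.81 mean the twin network ranks a progressed study above a non-progressed one roughly 70-80% of the time, depending on the pathologic observation.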

Citations: 0
Challenges of Implementing Artificial Intelligence-enabled Programs in the Clinical Practice of Radiology.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240411
James H Thrall
Citations: 0
Better AI for Kids: Learning from the AI-OPiNE Study.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240376
Patricia P Rafful, Sara Reis Teixeira
Citations: 0
Integrating Clinical Workflow for Breast Cancer Screening with AI.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240532
Hoyeon Lee
Citations: 0
Improving Fairness of Automated Chest Radiograph Diagnosis by Contrastive Learning.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230342
Mingquan Lin, Tianhao Li, Zhaoyi Sun, Gregory Holste, Ying Ding, Fei Wang, George Shih, Yifan Peng

Purpose To develop an artificial intelligence model that uses supervised contrastive learning (SCL) to minimize bias in chest radiograph diagnosis. Materials and Methods In this retrospective study, the proposed method was evaluated on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77 887 chest radiographs in 27 796 patients collected as of April 20, 2023, for COVID-19 diagnosis and the National Institutes of Health ChestX-ray14 dataset with 112 120 chest radiographs in 30 805 patients collected between 1992 and 2015. In the ChestX-ray14 dataset, thoracic abnormalities included atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia. The proposed method used SCL with carefully selected positive and negative samples to generate fair image embeddings, which were fine-tuned for subsequent tasks to reduce bias in chest radiograph diagnosis. The method was evaluated using the marginal area under the receiver operating characteristic curve difference (∆mAUC). Results The proposed model showed a significant decrease in bias across all subgroups compared with the baseline models, as evidenced by a paired t test (P < .001). The ∆mAUCs obtained by the proposed method were 0.01 (95% CI: 0.01, 0.01), 0.21 (95% CI: 0.21, 0.21), and 0.10 (95% CI: 0.10, 0.10) for sex, race, and age subgroups, respectively, on the MIDRC dataset and 0.01 (95% CI: 0.01, 0.01) and 0.05 (95% CI: 0.05, 0.05) for sex and age subgroups, respectively, on the ChestX-ray14 dataset. Conclusion Employing SCL can mitigate bias in chest radiograph diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods. Keywords: Thorax, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) Supplemental material is available for this article. 
© RSNA, 2024 See also the commentary by Johnson in this issue.
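One plausible reading of the ∆mAUC fairness metric is the gap between the best- and worst-served subgroup AUCs; a minimal sketch under that assumption (the toy scores and the max-minus-min reduction are mine, the paper defines the metric precisely):

```python
def auc(pos, neg):
    """AUC as the probability a positive case outscores a negative one."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def delta_mauc(groups):
    """Spread between the highest and lowest subgroup AUCs.
    groups: mapping of subgroup name -> (positive scores, negative scores)."""
    aucs = [auc(p, n) for p, n in groups.values()]
    return max(aucs) - min(aucs)

# Toy scores: the model separates cases better for group A than group B.
groups = {
    "A": ([0.9, 0.8], [0.2, 0.1]),  # AUC 1.00
    "B": ([0.7, 0.4], [0.6, 0.3]),  # AUC 0.75
}
print(round(delta_mauc(groups), 2))  # 0.25
```

Under this reading, the drop from a ∆mAUC of 0.21 to near zero for race subgroups on MIDRC would mean the contrastive embeddings largely closed the subgroup performance gap.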

Citations: 0