Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals
Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen
Purpose To develop and evaluate machine learning- and deep learning-based models for automated protocoling of emergency brain MRI scans based on clinical referral text. Materials and Methods In this single-institution, retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (naive Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish bidirectional encoder representations from transformers [BERT] and generative pretrained transformer [GPT]-3.5 [GPT-3.5 Turbo; OpenAI]) were developed to predict the MRI protocol and the need for a contrast agent. Each model was trained with three datasets (100% of the training data, 50% of the training data, and 50% plus augmented training data). Prediction accuracy was assessed with a test set. Results The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving an accuracy of 84% (95% CI: 80, 88) for the correct protocol and 91% (95% CI: 88, 94) for the contrast agent. BERT had an accuracy of 78% (95% CI: 74, 82) for the protocol and 89% (95% CI: 86, 92) for the contrast agent. The best machine learning model in the protocol task was XGBoost (accuracy, 78%; 95% CI: 73, 82), and the best machine learning models in the contrast agent task were support vector machine and XGBoost (accuracy, 88%; 95% CI: 84, 91 for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast agent task. Conclusion Machine learning and deep learning models demonstrated high performance in automatic protocoling of emergency brain MRI scans based on text from clinical referrals. Keywords: Natural Language Processing, Automatic Protocoling, Deep Learning, Machine Learning, Emergency Brain MRI Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Strotzer in this issue.
{"title":"Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals.","authors":"Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen","doi":"10.1148/ryai.230620","DOIUrl":"10.1148/ryai.230620","url":null,"abstract":"<p><p>Purpose To develop and evaluate machine learning and deep learning-based models for automated protocoling of emergency brain MRI scans based on clinical referral text. Materials and Methods In this single-institution, retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (naive Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish bidirectional encoder representations from transformers [BERT] and generative pretrained transformer [GPT]-3.5 [GPT-3.5 Turbo; Open AI]) were developed to predict the MRI protocol and need for a contrast agent. Each model was trained with three datasets (100% of training data, 50% of training data, and 50% plus augmented training data). Prediction accuracy was assessed with a test set. Results The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving an accuracy of 84% (95% CI: 80, 88) for the correct protocol and 91% (95% CI: 88, 94) for the contrast agent. BERT had an accuracy of 78% (95% CI: 74, 82) for the protocol and 89% (95% CI: 86, 92) for the contrast agent. The best machine learning model in the protocol task was XGBoost (accuracy, 78%; 95% CI: 73, 82), and the best machine learning models in the contrast agent task were support vector machine and XGBoost (accuracy, 88%; 95% CI: 84, 91 for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast medium task. Conclusion Machine learning and deep learning models demonstrated high performance in automatic protocoling of emergency brain MRI scans based on text from clinical referrals. <b>Keywords:</b> Natural Language Processing, Automatic Protocoling, Deep Learning, Machine Learning, Emergency Brain MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Strotzer in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230620"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography
John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe
Purpose To test a commercial artificial intelligence (AI) system for breast cancer detection at the BC Cancer Breast Screening Program. Materials and Methods In this retrospective study of 136 700 female individuals (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR, 14.0) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC), and AI performance was compared with that of radiologists using sensitivity and specificity. Results At 1-year follow-up, the AUC of the AI algorithm was 0.93 (95% CI: 0.92, 0.94) for breast cancer detection. Statistically significant differences were found across radiologist-assigned Breast Imaging Reporting and Data System breast density categories: category A, AUC of 0.96 (95% CI: 0.94, 0.99); category B, AUC of 0.94 (95% CI: 0.92, 0.95); category C, AUC of 0.93 (95% CI: 0.91, 0.95); and category D, AUC of 0.84 (95% CI: 0.76, 0.91) (A vs D, P = .002; B vs D, P = .009; C vs D, P = .02). The AI algorithm showed higher performance for mammograms with architectural distortion (AUC, 0.96 [95% CI: 0.94, 0.98]) than without (0.92 [95% CI: 0.90, 0.93]; P = .003) and lower performance for mammograms with calcification (0.87 [95% CI: 0.85, 0.90]) than without (0.92 [95% CI: 0.91, 0.94]; P < .001). Sensitivity of radiologists (92.6% ± 1.0) exceeded that of the AI algorithm (89.4% ± 1.1; P = .01), but there was no evidence of a difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2; P = .69). Conclusion The tested commercial AI algorithm generalized to a large external breast cancer screening cohort from Canada but showed different performance for some subgroups, including mammograms with architectural distortion or calcification. Keywords: Mammography, QA/QC, Screening, Technology Assessment, Screening Mammography, Artificial Intelligence, Breast Cancer, Model Testing, Bias and Fairness Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Milch and Lee in this issue.
{"title":"External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography.","authors":"John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe","doi":"10.1148/ryai.240287","DOIUrl":"10.1148/ryai.240287","url":null,"abstract":"<p><p>Purpose To test a commercial artificial intelligence (AI) system for breast cancer detection at the BC Cancer Breast Screening Program. Materials and Methods In this retrospective study of 136 700 female individuals (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR = 14.0) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC), and AI performance was compared with radiologists, using sensitivity and specificity. Results At 1-year follow-up, the AUC of the AI algorithm was 0.93 (95% CI: 0.92, 0.94) for breast cancer detection. Statistically significant differences were found for mammograms across radiologist-assigned Breast Imaging Reporting and Data System breast densities: category A, AUC of 0.96 (95% CI: 0.94, 0.99); category B, AUC of 0.94 (95% CI: 0.92, 0.95); category C, AUC of 0.93 (95% CI: 0.91, 0.95), and category D, AUC of 0.84 (95% CI: 0.76, 0.91) (A<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .002; B<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .009; C<sub>AUC</sub> > D<sub>AUC</sub>, <i>P</i> = .02). The AI showed higher performance for mammograms with architectural distortion (0.96 [95% CI: 0.94, 0.98]) versus without (0.92 [95% CI: 0.90, 0.93], <i>P</i> = .003) and lower performance for mammograms with calcification (0.87 [95% CI: 0.85, 0.90]) versus without (0.92 [95% CI: 0.91, 0.94], <i>P</i> < .001). Sensitivity of radiologists (92.6% ± 1.0) exceeded the AI algorithm (89.4% ± 1.1, <i>P</i> = .01), but there was no evidence of difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2, <i>P</i> = .69). Conclusion The tested commercial AI algorithm is generalizable for a large external breast cancer screening cohort from Canada but showed different performance for some subgroups, including those with architectural distortion or calcification in the image. <b>Keywords:</b> Mammography, QA/QC, Screening, Technology Assessment, Screening Mammography, Artificial Intelligence, Breast Cancer, Model Testing, Bias and Fairness <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Milch and Lee in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240287"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Establishing the Evidence Needed for AI-driven Mammography Screening.","authors":"Hannah S Milch, Christoph I Lee","doi":"10.1148/ryai.250152","DOIUrl":"10.1148/ryai.250152","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250152"},"PeriodicalIF":13.2,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127946/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143764707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Mortality with Deep Learning: Are Metrics Alone Enough?
Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki
{"title":"Predicting Mortality with Deep Learning: Are Metrics Alone Enough?","authors":"Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki","doi":"10.1148/ryai.250224","DOIUrl":"10.1148/ryai.250224","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250224"},"PeriodicalIF":13.2,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Francesco Santini, Jakob Wasserthal, Abramo Agosti, Xeni Deligianni, Kevin R Keene, Hermien E Kan, Stefan Sommer, Fengdan Wang, Claudia Weidensteiner, Giulia Manco, Matteo Paoletti, Valentina Mazzoli, Arjun Desai, Anna Pichiecchio
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered that could affect the content.
Purpose To develop a method for creating pseudo contrast-enhanced ultrasound (CEUS) images using an enhanced generative adversarial network and to evaluate its ability to assess tumor ablation efficacy. Materials and Methods This retrospective study included 1030 patients who underwent ablation treatment of thyroid nodules at seven centers between January 2020 and April 2023. A generative adversarial network-based model was developed to generate pseudo-CEUS directly from B-mode US and was tested on thyroid, breast, and liver ablation datasets. The structural similarity index (SSIM), color histogram correlation (CHC), and mean absolute percentage error (MAPE) relative to real CEUS were used to assess the reliability of pseudo-CEUS. In addition, a subjective evaluation system was designed to validate its clinical value. The Wilcoxon signed-rank test was used to analyze differences in the data. Results A total of 1030 patients were included (mean age, 46.9 years ± 12.5; 799 women and 231 men). The mean SSIM for internal test set 1 was 0.89 ± 0.05, while mean SSIM values for external test sets 1-6 ranged from 0.84 ± 0.08 to 0.88 ± 0.04. Subjective evaluation confirmed the method's stability and near-real performance in assessing ablation efficacy. The mean identification score on the thyroid ablation dataset was 0.49 (where 0.5 indicates that real and pseudo images are indistinguishable), and the mean similarity score across all datasets was 4.75 of 5. Radiologists' assessments of residual blood supply were nearly identical, with no difference in evaluating ablation zones between real and pseudo ultrasound. Conclusion Pseudo-CEUS showed high similarity to real CEUS in evaluating tumor ablation efficacy. Published under a CC BY 4.0 license.
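The abstract names three image-agreement metrics (SSIM, color histogram correlation, and MAPE). The sketch below shows common implementations of all three for a single image pair, assuming scikit-image and OpenCV are available; the arrays are random placeholders, and the authors' exact formulations may differ.

```python
# Sketch of the three agreement metrics named in the abstract, comparing a
# pseudo-CEUS frame against the corresponding real CEUS frame. Random arrays
# stand in for co-registered image pairs.
import numpy as np
import cv2
from skimage.metrics import structural_similarity

real = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)    # real CEUS frame
pseudo = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # generated frame

# SSIM over the color image (channel_axis marks the channel dimension).
ssim = structural_similarity(real, pseudo, channel_axis=2)

# Color histogram correlation: one 3D BGR histogram per image, then
# Pearson-style correlation between the two histograms.
h1 = cv2.calcHist([real], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
h2 = cv2.calcHist([pseudo], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
chc = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

# MAPE on pixel intensities (epsilon guards against division by zero).
eps = 1e-6
mape = np.mean(np.abs(real.astype(float) - pseudo.astype(float))
               / (real.astype(float) + eps)) * 100

print(f"SSIM {ssim:.3f}, CHC {chc:.3f}, MAPE {mape:.1f}%")
```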