Machine learning and deep learning for classifying the justification of brain CT referrals.

European Radiology, Pub Date: 2024-12-01 (Epub: 2024-06-24), pp. 7944-7952. DOI: 10.1007/s00330-024-10851-z

Jaka Potočnik, Edel Thomas, Aonghus Lawlor, Dearbhla Kearney, Eric J Heffernan, Ronan P Killeen, Shane J Foley

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11557633/pdf/

Abstract

Objectives: To train machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iGuide categorisation, and to determine whether the prediction models can generalise across multiple clinical sites and outperform human experts.

Methods: Adult brain computed tomography (CT) referrals for scans performed in three CT centres in Ireland in 2020 and 2021 were retrospectively collected. Two radiographers analysed the justification of 3000 randomly selected referrals using iGuide, and two consultant radiologists analysed the referrals on which the radiographers disagreed. Insufficient or duplicate referrals were discarded. The inter-rater agreement between the radiographers and between the consultants was computed. A random 4:1 train/test split was performed, and machine learning (ML) and deep learning (DL) techniques were applied to the unstructured clinical indications to automate retrospective justification auditing as a multi-class classification task. For the best-performing classifier of each type on the training set, accuracy and the macro-averaged F1 score were computed on the test set.
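As a rough illustration of the bag-of-words ML approach and 4:1 split evaluation described above, the sketch below uses scikit-learn; the file name, column names, and hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: bag-of-words features + gradient boosting for multi-class
# justification classification, with a 4:1 random split and the metrics
# reported in the study (accuracy, macro-averaged F1).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical input file: one row per referral, holding the free-text
# clinical indication and its iGuide category (justified / potentially
# justified / unjustified).
referrals = pd.read_csv("referrals.csv")
texts = referrals["clinical_indication"]
labels = referrals["iguide_category"]

# 4:1 random split, as in the study design.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels)

clf = make_pipeline(CountVectorizer(), GradientBoostingClassifier())
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```

The inter-rater agreement mentioned above could be quantified analogously with sklearn.metrics.cohen_kappa_score applied to the two raters' label vectors.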

Results: 42 referrals were excluded. Of the remaining 2958, 1909 (64.5%) referrals were justified, 811 (27.4%) were potentially justified, and 238 (8.1%) were unjustified. The agreement between the radiographers (κ = 0.268) was lower than that between the radiologists (κ = 0.460). The best-performing ML model was a bag-of-words-based gradient-boosting classifier, achieving 94.4% accuracy and a macro F1 of 0.94. DL models were inferior: the best, a bidirectional long short-term memory network, achieved 92.3% accuracy and a macro F1 of 0.92, outperforming multilayer perceptrons.
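For the DL comparator, the sketch below shows a bidirectional long short-term memory (LSTM) text classifier built with TensorFlow/Keras; the vocabulary size, sequence length, and layer widths are assumptions for illustration, not the architecture used in the study.

```python
# Minimal sketch: bidirectional LSTM over integer-encoded referral text,
# predicting one of three justification classes.
from tensorflow.keras import layers, models

NUM_CLASSES = 3      # justified / potentially justified / unjustified
VOCAB_SIZE = 10_000  # assumed vocabulary size
MAX_LEN = 128        # assumed maximum referral length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: padded integer token sequences; y_train: class indices in {0, 1, 2}
# model.fit(x_train, y_train, validation_split=0.1, epochs=10, batch_size=32)
```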

Conclusion: Interpreting unstructured clinical indications is challenging, necessitating clinical decision support. ML and DL can generalise across multiple clinical sites, outperform human experts, and serve as an artificial intelligence-based iGuide interpreter when retrospectively vetting radiology referrals.

Clinical relevance statement: Healthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. This would enable better implementation of imaging referral guidelines in clinical practices and reduce population dose burden, CT waiting lists, and wasteful use of resources.

Key points: Significant variations exist among human experts in interpreting unstructured clinical indications/patient presentations. Machine and deep learning can automate the justification analysis of radiology referrals according to iGuide categorisation. Machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.
