Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.

Radiology: Artificial Intelligence · Pub Date: 2024-11-01 · DOI: 10.1148/ryai.230520 · IF 8.1 · Q1 (Computer Science, Artificial Intelligence)
Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li
{"title":"Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.","authors":"Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li","doi":"10.1148/ryai.230520","DOIUrl":null,"url":null,"abstract":"<p><p>Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; <i>P</i> < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. <b>Keywords:</b> Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230520"},"PeriodicalIF":8.1000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.230520","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; P < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. Keywords: Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI Supplemental material is available for this article. Published under a CC BY 4.0 license.
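The abstract's headline metrics are macro-averaged and micro-averaged AUC over the 15 conditions (14 diseases plus normal brain). As a rough illustration of the difference between the two averages, the sketch below computes both with scikit-learn; the one-vs-rest label layout, the synthetic data, and the use of `roc_auc_score` are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): macro- vs micro-averaged AUC
# for a 15-way condition classifier, assuming one-vs-rest scoring and
# hypothetical ground-truth labels and model scores.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

n_conditions = 15                    # 14 diseases + normal brain (assumed layout)
rng = np.random.default_rng(0)

# Hypothetical ground truth (one condition per scan) and per-condition scores.
y_true = rng.integers(0, n_conditions, size=500)
y_score = rng.random((500, n_conditions))
y_score /= y_score.sum(axis=1, keepdims=True)    # normalize to pseudo-probabilities

# One-hot encode labels so each condition is evaluated one-vs-rest.
y_true_bin = label_binarize(y_true, classes=np.arange(n_conditions))

# Macro-average: AUC per condition, then an unweighted mean,
# so rare conditions weigh as much as common ones.
ma_auc = roc_auc_score(y_true_bin, y_score, average="macro")

# Micro-average: pool every (scan, condition) decision into one ROC curve,
# so frequent conditions dominate the result.
mi_auc = roc_auc_score(y_true_bin, y_score, average="micro")

print(f"ma-AUC = {ma_auc:.3f}, mi-AUC = {mi_auc:.3f}")
```

Reporting both averages, as the study does, shows whether performance holds up on infrequent diagnoses (macro) as well as on the overall case mix (micro).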
