
Radiology-Artificial Intelligence: Latest Articles

Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-20 DOI: 10.1148/ryai.240206
Mami Iima, Ryosuke Mizuno, Masako Kataoka, Kazuki Tsuji, Toshiki Yamazaki, Akihiko Minami, Maya Honda, Keiho Imanishi, Masahiro Takada, Yuji Nakamoto

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate and compare performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors on diffusion-weighted imaging (DWI), including comparison with radiologist assessments. Materials and Methods In this retrospective study, patients with breast lesions underwent 3T breast MRI from May 2019 to March 2022. In addition to T1-weighted imaging, T2-weighted imaging, and contrast-enhanced imaging, DWI was acquired five b-values (0, 200, 800, 1000, and 1500 s/mm2). DWI data split into training and tuning and test sets were used for the development and assessment of AI models, including a small 2D convolutional neural network (CNN), ResNet18, EfficientNet-B0, and a 3D CNN. Performance of the DWI-based models in differentiating between benign and malignant breast tumors was compared with that of radiologists assessing standard breast MRI, with diagnostic performance assessed using receiver operating characteristic analysis. The study also examined data augmentation effects (A: random elastic deformation, B: random affine transformation/random noise, and C: mixup) on model performance. Results A total of 334 breast lesions in 293 patients (mean age [SD], 56.5 [15.1] years; all female) were analyzed. 2D CNN models outperformed the 3D CNN on the test dataset (area under the receiver operating characteristic curve [AUC] with different data augmentation methods: 0.83-0.88 versus 0.75-0.76). There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC 0.88) and the radiologists (AUC 0.86) on the test dataset (P = .64). When comparing the small 2D CNN to radiologists, there was no evidence of a difference in specificity (81.4% versus 72.1%; P = .64) or sensitivity (85.9% versus 98.8%; P = .64). Conclusion AI models, particularly a small 2D CNN, showed good performance in differentiating between malignant and benign breast tumors using DWI, without needing manual segmentation. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校样审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 评估和比较不同人工智能(AI)模型在弥散加权成像(DWI)上区分良性和恶性乳腺肿瘤的性能,包括与放射科医生评估结果的比较。材料与方法 在这项回顾性研究中,乳腺病变患者在2019年5月至2022年3月期间接受了3T乳腺磁共振成像检查。除了 T1 加权成像、T2 加权成像和对比增强成像外,还采集了五个 b 值(0、200、800、1000 和 1500 s/mm2)的 DWI。DWI 数据分为训练集、调整集和测试集,用于开发和评估人工智能模型,包括小型 2D 卷积神经网络 (CNN)、ResNet18、EfficientNet-B0 和 3D CNN。将基于 DWI 的模型在区分良性和恶性乳腺肿瘤方面的性能与放射科医生评估标准乳腺 MRI 的性能进行了比较,并使用接收器操作特性分析对诊断性能进行了评估。研究还考察了数据增强对模型性能的影响(A:随机弹性变形;B:随机仿射变换/随机噪声;C:混杂)。结果 共分析了 293 名患者(平均年龄 [SD] 56.5 [15.1] 岁;均为女性)的 334 个乳腺病变。二维 CNN 模型在测试数据集上的表现优于三维 CNN(采用不同数据增强方法的接收者工作特征曲线下面积 [AUC]:0.83-0.88 对 0.75-0.76)。在测试数据集上,采用 A 和 B 增强方法的小型 2D CNN(AUC 0.88)与放射医师(AUC 0.86)的性能没有差异(P = .64)。在将小型二维 CNN 与放射医师进行比较时,没有证据表明两者在特异性(81.4% 对 72.1%;P = .64)或灵敏度(85.9% 对 98.8%;P = .64)方面存在差异。结论 人工智能模型,尤其是小型二维 CNN,在使用 DWI 区分乳腺恶性肿瘤和良性肿瘤方面表现出色,无需人工分割。©RSNA, 2024.
{"title":"Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation.","authors":"Mami Iima, Ryosuke Mizuno, Masako Kataoka, Kazuki Tsuji, Toshiki Yamazaki, Akihiko Minami, Maya Honda, Keiho Imanishi, Masahiro Takada, Yuji Nakamoto","doi":"10.1148/ryai.240206","DOIUrl":"https://doi.org/10.1148/ryai.240206","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate and compare performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors on diffusion-weighted imaging (DWI), including comparison with radiologist assessments. Materials and Methods In this retrospective study, patients with breast lesions underwent 3T breast MRI from May 2019 to March 2022. In addition to T1-weighted imaging, T2-weighted imaging, and contrast-enhanced imaging, DWI was acquired five b-values (0, 200, 800, 1000, and 1500 s/mm<sup>2</sup>). DWI data split into training and tuning and test sets were used for the development and assessment of AI models, including a small 2D convolutional neural network (CNN), ResNet18, EfficientNet-B0, and a 3D CNN. Performance of the DWI-based models in differentiating between benign and malignant breast tumors was compared with that of radiologists assessing standard breast MRI, with diagnostic performance assessed using receiver operating characteristic analysis. The study also examined data augmentation effects (A: random elastic deformation, B: random affine transformation/random noise, and C: mixup) on model performance. Results A total of 334 breast lesions in 293 patients (mean age [SD], 56.5 [15.1] years; all female) were analyzed. 2D CNN models outperformed the 3D CNN on the test dataset (area under the receiver operating characteristic curve [AUC] with different data augmentation methods: 0.83-0.88 versus 0.75-0.76). There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC 0.88) and the radiologists (AUC 0.86) on the test dataset (<i>P</i> = .64). When comparing the small 2D CNN to radiologists, there was no evidence of a difference in specificity (81.4% versus 72.1%; <i>P</i> = .64) or sensitivity (85.9% versus 98.8%; <i>P</i> = .64). Conclusion AI models, particularly a small 2D CNN, showed good performance in differentiating between malignant and benign breast tumors using DWI, without needing manual segmentation. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240206"},"PeriodicalIF":8.1,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142677229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RSNA 2023 Abdominal Trauma AI Challenge Review and Outcomes Analysis.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-06 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate the performance of the winning machine learning (ML) models from the 2023 RSNA Abdominal Trauma Detection Artificial Intelligence Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26, 2023, to October 15, 2023. The multicenter competition dataset consisted of 4,274 abdominal trauma CT scans in which solid organs (liver, spleen and kidneys) were annotated as healthy, low-grade or high-grade injury. Studies were labeled as positive or negative for the presence of bowel/mesenteric injury and active extravasation. In this study, performances of the 8 award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range:0.91-0.94) for liver, 0.91 (range:0.87-0.93) for splenic, and 0.94 (range:0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range:0.96-0.98) for high-grade liver, 0.98 (range:0.97-0.99) for high-grade splenic, and 0.98 (range:0.97-0.98) for high-grade kidney injuries. For the detection of bowel/mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range:0.74-0.73) and 0.85 (range:0.79-0.89) respectively. Conclusion The award-winning models from the AI challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 评估 2023 年 RSNA 腹部创伤检测人工智能挑战赛获奖机器学习(ML)模型的性能。材料与方法 比赛在 Kaggle 上举办,时间为 2023 年 7 月 26 日至 2023 年 10 月 15 日。多中心竞赛数据集包括 4,274 份腹部创伤 CT 扫描,其中实体器官(肝脏、脾脏和肾脏)被标注为健康、低度或高度损伤。对于肠/括约肌损伤和活动性外渗,研究结果被标记为阳性或阴性。在本研究中,对 8 个获奖模型的性能进行了回顾性评估,并使用各种指标(包括接收器操作特征曲线下面积 (AUC))对每个损伤类别进行了比较。所报告的这些指标的平均值是通过对每种特定损伤类型的所有模型的性能进行平均计算得出的。结果 这些模型在检测实体器官损伤,尤其是高级别损伤方面表现出很强的性能。在损伤的二元检测中,模型对肝脏损伤的平均 AUC 值为 0.92(范围:0.91-0.94),对脾脏损伤的平均 AUC 值为 0.91(范围:0.87-0.93),对肾脏损伤的平均 AUC 值为 0.94(范围:0.93-0.95)。这些模型的平均 AUC 值分别为:高级别肝损伤 0.98(范围:0.96-0.98),高级别脾损伤 0.98(范围:0.97-0.99),高级别肾损伤 0.98(范围:0.97-0.98)。在检测肠道/肠膜损伤和活动性外渗方面,模型的平均 AUC 值分别为 0.85(范围:0.74-0.73)和 0.85(范围:0.79-0.89)。结论 在人工智能挑战赛中获奖的模型在检测 CT 扫描中的腹部创伤,尤其是高级别创伤方面表现出了很强的性能。这些模型可作为未来研究和算法的性能基线。©RSNA,2024。
{"title":"RSNA 2023 Abdominal Trauma AI Challenge Review and Outcomes Analysis.","authors":"Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak","doi":"10.1148/ryai.240334","DOIUrl":"https://doi.org/10.1148/ryai.240334","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate the performance of the winning machine learning (ML) models from the 2023 RSNA Abdominal Trauma Detection Artificial Intelligence Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26, 2023, to October 15, 2023. The multicenter competition dataset consisted of 4,274 abdominal trauma CT scans in which solid organs (liver, spleen and kidneys) were annotated as healthy, low-grade or high-grade injury. Studies were labeled as positive or negative for the presence of bowel/mesenteric injury and active extravasation. In this study, performances of the 8 award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range:0.91-0.94) for liver, 0.91 (range:0.87-0.93) for splenic, and 0.94 (range:0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range:0.96-0.98) for high-grade liver, 0.98 (range:0.97-0.99) for high-grade splenic, and 0.98 (range:0.97-0.98) for high-grade kidney injuries. For the detection of bowel/mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range:0.74-0.73) and 0.85 (range:0.79-0.89) respectively. Conclusion The award-winning models from the AI challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240334"},"PeriodicalIF":8.1,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-06 DOI: 10.1148/ryai.240124
Casey E Stowers, Chengyue Wu, Zhan Xu, Sidharth Kumar, Clinton Yam, Jong Bum Son, Jingfei Ma, Jonathan I Tamir, Gaiane M Rauch, Thomas E Yankeelov

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To combine deep learning and biology-based modeling to predict the response of locally advanced, triple negative breast cancer before initiating neoadjuvant chemotherapy (NAC). Materials and Methods In this retrospective study, a biology-based mathematical model of tumor response to NAC was constructed and calibrated on a patient-specific basis using imaging data from patients enrolled in the MD Anderson ARTEMIS trial (ClinicalTrials.gov, NCT02276443) between April 2018 and May 2021. To relate the calibrated parameters in the biology-based model and pretreatment MRI data, a convolutional neural network (CNN) was employed. The CNN predictions of the calibrated model parameters were used to estimate tumor response at the end of NAC. CNN performance in the estimations of total tumor volume (TTV), total tumor cellularity (TTC), and tumor status was evaluated. Model-predicted TTC and TTV measurements were compared with MRI-based measurements using the concordance correlation coefficient (CCC), and area under the receiver operating characteristic curve (for predicting pathologic complete response at the end of NAC). Results The study included 118 female patients (median age, 51 [range, 29-78] years). For comparison of CNN predicted to measured change in TTC and TTV over the course of NAC, the CCCs were 0.95 (95% CI: 0.90-0.98) and 0.94 (95% CI: 0.87-0.97), respectively. CNN-predicted TTC and TTV had an AUC of 0.72 (95% CI: 0.34-0.94) and 0.72 (95% CI: 0.40-0.95) for predicting tumor status at the time of surgery, respectively. Conclusion Deep learning integrated with a biology-based mathematical model showed good performance in predicting the spatial and temporal evolution of a patient's tumor during NAC using only pre-NAC MRI data. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 结合深度学习和基于生物学的建模,在开始新辅助化疗(NAC)前预测局部晚期三阴性乳腺癌的反应。材料与方法 在这项回顾性研究中,利用2018年4月至2021年5月期间MD安德森ARTEMIS试验(ClinicalTrials.gov,NCT02276443)入组患者的成像数据,构建了基于生物学的NAC肿瘤反应数学模型,并在患者特异性的基础上进行了校准。为了将基于生物学的模型中的校准参数与治疗前核磁共振成像数据联系起来,采用了卷积神经网络(CNN)。CNN 对校准模型参数的预测用于估计 NAC 结束时的肿瘤反应。评估了 CNN 在估计肿瘤总体积(TTV)、肿瘤细胞总数(TTC)和肿瘤状态方面的性能。使用一致性相关系数(CCC)和接收者操作特征曲线下面积(用于预测 NAC 结束时的病理完全反应)将模型预测的 TTC 和 TTV 测量值与基于 MRI 的测量值进行比较。结果 研究纳入了 118 名女性患者(中位年龄 51 [范围 29-78] 岁)。比较 CNN 预测与测量的 TTC 和 TTV 在 NAC 疗程中的变化,CCC 分别为 0.95(95% CI:0.90-0.98)和 0.94(95% CI:0.87-0.97)。CNN 预测的 TTC 和 TTV 预测手术时肿瘤状态的 AUC 分别为 0.72(95% CI:0.34-0.94)和 0.72(95% CI:0.40-0.95)。结论 深度学习与基于生物学的数学模型相结合,在仅使用 NAC 前的 MRI 数据预测 NAC 期间患者肿瘤的空间和时间演变方面表现出色。©RSNA, 2024.
{"title":"Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.","authors":"Casey E Stowers, Chengyue Wu, Zhan Xu, Sidharth Kumar, Clinton Yam, Jong Bum Son, Jingfei Ma, Jonathan I Tamir, Gaiane M Rauch, Thomas E Yankeelov","doi":"10.1148/ryai.240124","DOIUrl":"10.1148/ryai.240124","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To combine deep learning and biology-based modeling to predict the response of locally advanced, triple negative breast cancer before initiating neoadjuvant chemotherapy (NAC). Materials and Methods In this retrospective study, a biology-based mathematical model of tumor response to NAC was constructed and calibrated on a patient-specific basis using imaging data from patients enrolled in the MD Anderson ARTEMIS trial (ClinicalTrials.gov, NCT02276443) between April 2018 and May 2021. To relate the calibrated parameters in the biology-based model and pretreatment MRI data, a convolutional neural network (CNN) was employed. The CNN predictions of the calibrated model parameters were used to estimate tumor response at the end of NAC. CNN performance in the estimations of total tumor volume (TTV), total tumor cellularity (TTC), and tumor status was evaluated. Model-predicted TTC and TTV measurements were compared with MRI-based measurements using the concordance correlation coefficient (CCC), and area under the receiver operating characteristic curve (for predicting pathologic complete response at the end of NAC). Results The study included 118 female patients (median age, 51 [range, 29-78] years). For comparison of CNN predicted to measured change in TTC and TTV over the course of NAC, the CCCs were 0.95 (95% CI: 0.90-0.98) and 0.94 (95% CI: 0.87-0.97), respectively. CNN-predicted TTC and TTV had an AUC of 0.72 (95% CI: 0.34-0.94) and 0.72 (95% CI: 0.40-0.95) for predicting tumor status at the time of surgery, respectively. Conclusion Deep learning integrated with a biology-based mathematical model showed good performance in predicting the spatial and temporal evolution of a patient's tumor during NAC using only pre-NAC MRI data. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240124"},"PeriodicalIF":8.1,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-06 DOI: 10.1148/ryai.240017
Yi Yang, Zhenyao Chang, Xin Nie, Jun Wu, Jingang Chen, Weiqi Liu, Hongwei He, Shuo Wang, Chengcheng Zhu, Qingyuan Liu

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop a deep learning model for the morphologic measurement of unruptured intracranial aneurysms (UIAs) based on CT angiography (CTA) data and validate its performance using a multicenter dataset. Materials and Methods In this retrospective study, patients with CTA examinations, including those with and without UIAs, in a tertiary referral hospital from February 2018 to February 2021 were included as the training dataset. Patients with UIAs who underwent CTA at multiple centers between April 2021 to December 2022 were included as the multicenter external testing set. An integrated deep-learning (IDL) model was developed for UIA detection, segmentation and morphologic measurement using an nnU-net algorithm. Model performance was evaluated using the Dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC), with measurements by senior radiologists serving as the reference standard. The ability of the IDL model to improve performance of junior radiologists in measuring morphologic UIA features was assessed. Results The study included 1182 patients with UIAs and 578 controls without UIAs as the training dataset (55 years [IQR, 47-62], 1,012 [57.5%] females) and 535 patients with UIAs as the multicenter external testing set (57 years [IQR, 50-63], 353 [66.0%] females). The IDL model achieved 97% accuracy in detecting UIAs and achieved a DSC of 0.90 (95%CI, 0.88-0.92) for UIA segmentation. Model-based morphologic measurements showed good agreement with reference standard measurements (all ICCs > 0.85). Within the multicenter external testing set, the IDL model also showed agreement with reference standard measurements (all ICCs > 0.80). Junior radiologists assisted by the IDL model showed significantly improved performance in measuring UIA size (ICC improved from 0.88 [0.80-0.92] to 0.96 [0.92-0.97], P < .001). Conclusion The developed integrated deep learning model using CTA data showed good performance in UIA detection, segmentation and morphologic measurement and may be used to assist less experienced radiologists in morphologic analysis of UIAs. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校样审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。目的 基于 CT 血管造影(CTA)数据,开发一种用于未破裂颅内动脉瘤(UIAs)形态测量的深度学习模型,并使用多中心数据集验证其性能。材料与方法 在这项回顾性研究中,将 2018 年 2 月至 2021 年 2 月在一家三级转诊医院接受 CTA 检查的患者(包括有 UIA 和无 UIA 的患者)作为训练数据集。2021 年 4 月至 2022 年 12 月期间在多个中心接受 CTA 检查的 UIA 患者作为多中心外部测试集。利用 nnU-net 算法开发了一个集成深度学习(IDL)模型,用于 UIA 检测、分割和形态测量。使用狄斯相似系数(DSC)和类内相关系数(ICC)对模型性能进行了评估,并将资深放射科医生的测量结果作为参考标准。评估了 IDL 模型提高初级放射医师测量 UIA 形态特征的能力。结果 研究纳入了 1182 名 UIA 患者和 578 名无 UIA 的对照组作为训练数据集(55 岁 [IQR,47-62],1,012 [57.5%] 女性),并纳入了 535 名 UIA 患者作为多中心外部测试集(57 岁 [IQR,50-63],353 [66.0%] 女性)。IDL 模型检测 UIA 的准确率达到 97%,UIA 分割的 DSC 为 0.90(95%CI,0.88-0.92)。基于模型的形态测量结果与参考标准测量结果显示出良好的一致性(所有 ICC 均大于 0.85)。在多中心外部测试集中,IDL 模型也与参考标准测量结果一致(所有 ICC 均大于 0.80)。由 IDL 模型辅助的初级放射医师在测量 UIA 大小方面的表现明显提高(ICC 从 0.88 [0.80-0.92] 提高到 0.96 [0.92-0.97],P < .001)。结论 利用 CTA 数据开发的集成深度学习模型在 UIA 检测、分割和形态测量方面表现出色,可用于协助经验不足的放射科医生对 UIA 进行形态分析。©RSNA,2024。
{"title":"Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.","authors":"Yi Yang, Zhenyao Chang, Xin Nie, Jun Wu, Jingang Chen, Weiqi Liu, Hongwei He, Shuo Wang, Chengcheng Zhu, Qingyuan Liu","doi":"10.1148/ryai.240017","DOIUrl":"https://doi.org/10.1148/ryai.240017","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning model for the morphologic measurement of unruptured intracranial aneurysms (UIAs) based on CT angiography (CTA) data and validate its performance using a multicenter dataset. Materials and Methods In this retrospective study, patients with CTA examinations, including those with and without UIAs, in a tertiary referral hospital from February 2018 to February 2021 were included as the training dataset. Patients with UIAs who underwent CTA at multiple centers between April 2021 to December 2022 were included as the multicenter external testing set. An integrated deep-learning (IDL) model was developed for UIA detection, segmentation and morphologic measurement using an nnU-net algorithm. Model performance was evaluated using the Dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC), with measurements by senior radiologists serving as the reference standard. The ability of the IDL model to improve performance of junior radiologists in measuring morphologic UIA features was assessed. Results The study included 1182 patients with UIAs and 578 controls without UIAs as the training dataset (55 years [IQR, 47-62], 1,012 [57.5%] females) and 535 patients with UIAs as the multicenter external testing set (57 years [IQR, 50-63], 353 [66.0%] females). The IDL model achieved 97% accuracy in detecting UIAs and achieved a DSC of 0.90 (95%CI, 0.88-0.92) for UIA segmentation. Model-based morphologic measurements showed good agreement with reference standard measurements (all ICCs > 0.85). Within the multicenter external testing set, the IDL model also showed agreement with reference standard measurements (all ICCs > 0.80). Junior radiologists assisted by the IDL model showed significantly improved performance in measuring UIA size (ICC improved from 0.88 [0.80-0.92] to 0.96 [0.92-0.97], <i>P</i> < .001). Conclusion The developed integrated deep learning model using CTA data showed good performance in UIA detection, segmentation and morphologic measurement and may be used to assist less experienced radiologists in morphologic analysis of UIAs. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240017"},"PeriodicalIF":8.1,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-06 DOI: 10.1148/ryai.240005
Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023 from 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 males). The data consisted of T2-weighted MRI acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic and lumbar spine. A deep learning model, SCIseg, was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg and contrast-agnostic, all part of the Spinal Cord Toolbox). Wilcoxon signed-rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results SCIseg achieved a Dice score of 0.92 ± 0.07 (mean ± SD) and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively. There was no evidence of a difference between lesion length (P = .42) and maximal axial damage ratio (P = .16) computed from manually annotated lesions and the lesion segmentations obtained using SCIseg. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and extracted relevant lesion biomarkers (namely, lesion volume, lesion length, and maximal axial damage ratio). SCIseg is open-source and accessible through the Spinal Cord Toolbox (v6.2 and above). Published under a CC BY 4.0 license.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。目的 开发一种深度学习工具,用于在 T2 加权磁共振成像扫描中自动分割脊髓损伤(SCI)的脊髓和髓内病变。材料与方法 这项回顾性研究纳入了 2002 年 7 月至 2023 年 2 月期间从 191 名 SCI 患者(平均年龄为 48.1 岁 ± 17.9 [SD];142 名男性)处获取的 MRI 数据。数据包括使用不同扫描仪制造商、不同图像分辨率(各向同性和各向异性)和方向(轴向和矢状)采集的 T2 加权 MRI。患者的病因(外伤性、缺血性和出血性)和病变位置各不相同,遍及颈椎、胸椎和腰椎。深度学习模型 SCIseg 的训练分为三个阶段,其中包括主动学习,用于自动分割髓内 SCI 病变和脊髓。将所提模型的分割结果与其他三种开源方法(PropSeg、DeepSeg 和 contrast-agnostic,均为脊髓工具箱的一部分)的分割结果进行了直观和定量比较。使用Wilcoxon符号秩检验比较人工参考标准病变掩膜和SCIseg分割自动获得的SCI定量MRI生物标志物(病变体积、病变长度和最大轴向损伤比)。结果 SCIseg 对脊髓和 SCI 病灶分割的 Dice 评分分别为 0.92 ± 0.07(平均 ± SD)和 0.61 ± 0.27。根据人工标注的病灶计算出的病灶长度(P = .42)和最大轴向损伤率(P = .16)与使用 SCIseg 获得的病灶分割结果之间没有差异。结论 SCIseg 能在不同的 T2 加权磁共振成像扫描数据集上准确分割髓内病变,并提取相关的病变生物标志物(即病变体积、病变长度和最大轴向损伤比)。SCIseg 是开源的,可通过脊髓工具箱(v6.2 及以上版本)访问。以 CC BY 4.0 许可发布。
{"title":"SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.","authors":"Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad","doi":"10.1148/ryai.240005","DOIUrl":"10.1148/ryai.240005","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023 from 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 males). The data consisted of T2-weighted MRI acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic and lumbar spine. A deep learning model, SCIseg, was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg and contrast-agnostic, all part of the Spinal Cord Toolbox). Wilcoxon signed-rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results SCIseg achieved a Dice score of 0.92 ± 0.07 (mean ± SD) and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively. There was no evidence of a difference between lesion length (<i>P</i> = .42) and maximal axial damage ratio (<i>P</i> = .16) computed from manually annotated lesions and the lesion segmentations obtained using SCIseg. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and extracted relevant lesion biomarkers (namely, lesion volume, lesion length, and maximal axial damage ratio). SCIseg is open-source and accessible through the Spinal Cord Toolbox (v6.2 and above). Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240005"},"PeriodicalIF":8.1,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Achieving More with Less: Combining Strong and Weak Labels for Intracranial Hemorrhage Detection.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240670
Tugba Akinci D'Antonoli, Jeffrey D Rudie
{"title":"Achieving More with Less: Combining Strong and Weak Labels for Intracranial Hemorrhage Detection.","authors":"Tugba Akinci D'Antonoli, Jeffrey D Rudie","doi":"10.1148/ryai.240670","DOIUrl":"https://doi.org/10.1148/ryai.240670","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"6 6","pages":"e240670"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Addressing the Generalizability of AI in Radiology Using a Novel Data Augmentation Framework with Synthetic Patient Image Data: Proof-of-Concept and External Validation for Classification Tasks in Multiple Sclerosis.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230514
Gianluca Brugnara, Chandrakanth Jayachandran Preetha, Katerina Deike, Robert Haase, Thomas Pinetz, Martha Foltyn-Dumitru, Mustafa A Mahmutoglu, Brigitte Wildemann, Ricarda Diem, Wolfgang Wick, Alexander Radbruch, Martin Bendszus, Hagen Meredig, Aditya Rastogi, Philipp Vollmuth

Artificial intelligence (AI) models often face performance drops after deployment to external datasets. This study evaluated the potential of a novel data augmentation framework based on generative adversarial networks (GANs) that creates synthetic patient image data for model training to improve model generalizability. Model development and external testing were performed for a given classification task, namely the detection of new fluid-attenuated inversion recovery lesions at MRI during longitudinal follow-up of patients with multiple sclerosis (MS). An internal dataset of 669 patients with MS (n = 3083 examinations) was used to develop an attention-based network, trained both with and without the inclusion of the GAN-based synthetic data augmentation framework. External testing was performed on 134 patients with MS from a different institution, with MR images acquired using different scanners and protocols than images used during training. Models trained using synthetic data augmentation showed a significant performance improvement when applied on external data (area under the receiver operating characteristic curve [AUC], 83.6% without synthetic data vs 93.3% with synthetic data augmentation; P = .03), achieving comparable results to the internal test set (AUC, 95.0%; P = .53), whereas models without synthetic data augmentation demonstrated a performance drop upon external testing (AUC, 93.8% on internal dataset vs 83.6% on external data; P = .03). Data augmentation with synthetic patient data substantially improved performance of AI models on unseen MRI data and may be extended to other clinical conditions or tasks to mitigate domain shift, limit class imbalance, and enhance the robustness of AI applications in medical imaging. Keywords: Brain, Brain Stem, Multiple Sclerosis, Synthetic Data Augmentation, Generative Adversarial Network Supplemental material is available for this article. © RSNA, 2024.
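The abstract does not describe the GAN framework itself; the sketch below only illustrates one plausible way pre-generated synthetic patient images could be folded into the real training set with PyTorch. The dataset classes, sizes, and labels are hypothetical.

```python
# Minimal sketch (assumed workflow, not the authors' pipeline): combine real and
# GAN-generated training examples so a classifier sees both during training.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class TensorListDataset(Dataset):
    """Wraps a list of (image, label) tensors; stands in for real or synthetic data."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

real = TensorListDataset([(torch.randn(1, 128, 128), torch.tensor(1.0)) for _ in range(100)])
synthetic = TensorListDataset([(torch.randn(1, 128, 128), torch.tensor(0.0)) for _ in range(50)])

# Training batches draw from a mixture of real and synthetic examples.
train_loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=16, shuffle=True)
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)
```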

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。人工智能(AI)模型在部署到外部数据集后往往会面临性能下降的问题。本研究评估了基于生成式对抗网络(GAN)的新型数据增强框架的潜力,该框架可在模型训练期间创建合成患者图像数据,从而提高模型的通用性。研究针对一项特定的分类任务进行了模型开发和外部测试,该任务是在多发性硬化症(MS)患者的纵向随访过程中检测磁共振成像上的新流体增强反转恢复(FLAIR)病灶。669 名多发性硬化症患者(n = 3083 次检查)的内部数据集被用于开发基于注意力的网络,该网络在使用或未使用基于 GAN 的合成数据增强框架的情况下均得到了训练。外部测试是在来自不同机构的 134 名多发性硬化症患者身上进行的,他们使用不同的扫描仪和方案获取磁共振图像,与训练时使用的图像不同。使用合成数据增强训练的模型在应用于外部数据时表现出显著的性能提升(无合成数据时的AUC为83.6%,有合成数据增强时的AUC为93.3%,P = .03),达到了与内部测试集相当的结果(AUC为95.5%,P = .53),而无合成数据增强的模型在外部测试时表现出性能下降(内部数据集的AUC为93.8%,外部数据集的AUC为83.6%,P = .03)。用合成患者数据增强数据大大提高了人工智能模型在未见核磁共振成像数据上的性能,并可扩展到其他临床条件或任务,以减轻领域偏移、限制类不平衡,并增强人工智能在医学成像应用中的稳健性。©RSNA,2024。
{"title":"Addressing the Generalizability of AI in Radiology Using a Novel Data Augmentation Framework with Synthetic Patient Image Data: Proof-of-Concept and External Validation for Classification Tasks in Multiple Sclerosis.","authors":"Gianluca Brugnara, Chandrakanth Jayachandran Preetha, Katerina Deike, Robert Haase, Thomas Pinetz, Martha Foltyn-Dumitru, Mustafa A Mahmutoglu, Brigitte Wildemann, Ricarda Diem, Wolfgang Wick, Alexander Radbruch, Martin Bendszus, Hagen Meredig, Aditya Rastogi, Philipp Vollmuth","doi":"10.1148/ryai.230514","DOIUrl":"10.1148/ryai.230514","url":null,"abstract":"<p><p>Artificial intelligence (AI) models often face performance drops after deployment to external datasets. This study evaluated the potential of a novel data augmentation framework based on generative adversarial networks (GANs) that creates synthetic patient image data for model training to improve model generalizability. Model development and external testing were performed for a given classification task, namely the detection of new fluid-attenuated inversion recovery lesions at MRI during longitudinal follow-up of patients with multiple sclerosis (MS). An internal dataset of 669 patients with MS (<i>n</i> = 3083 examinations) was used to develop an attention-based network, trained both with and without the inclusion of the GAN-based synthetic data augmentation framework. External testing was performed on 134 patients with MS from a different institution, with MR images acquired using different scanners and protocols than images used during training. Models trained using synthetic data augmentation showed a significant performance improvement when applied on external data (area under the receiver operating characteristic curve [AUC], 83.6% without synthetic data vs 93.3% with synthetic data augmentation; <i>P</i> = .03), achieving comparable results to the internal test set (AUC, 95.0%; <i>P</i> = .53), whereas models without synthetic data augmentation demonstrated a performance drop upon external testing (AUC, 93.8% on internal dataset vs 83.6% on external data; <i>P</i> = .03). Data augmentation with synthetic patient data substantially improved performance of AI models on unseen MRI data and may be extended to other clinical conditions or tasks to mitigate domain shift, limit class imbalance, and enhance the robustness of AI applications in medical imaging. <b>Keywords:</b> Brain, Brain Stem, Multiple Sclerosis, Synthetic Data Augmentation, Generative Adversarial Network <i>Supplemental material is available for this article.</i> © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230514"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142476382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230520
Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li

Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; P < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. Keywords: Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI Supplemental material is available for this article. Published under a CC BY 4.0 license.
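A short sketch of the two summary metrics reported above, macro-averaged and micro-averaged AUC for a 15-condition multilabel classifier, computed with scikit-learn on random placeholder data.

```python
# Sketch: ma-AUC averages the per-condition AUCs; mi-AUC pools every (label, score)
# pair across conditions before computing a single AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_scans, n_conditions = 1000, 15
y_true = rng.integers(0, 2, size=(n_scans, n_conditions))
y_score = np.clip(y_true * 0.5 + rng.random((n_scans, n_conditions)) * 0.5, 0, 1)

ma_auc = roc_auc_score(y_true, y_score, average="macro")  # mean of per-condition AUCs
mi_auc = roc_auc_score(y_true, y_score, average="micro")  # AUC over pooled label-score pairs
print(f"ma-AUC = {ma_auc:.2f}, mi-AUC = {mi_auc:.2f}")
```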

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 通过结合放射学报告衍生的文本特征,引导深度学习(DL)模型关注脑部病变 MRI 特征,从而实现可解释的病变检测。材料与方法 在这项回顾性研究中,来自 1 号中心的 35282 份脑 MRI 扫描(2018 年 1 月至 2023 年 6 月)和相应的放射学报告被用于训练、验证和内部测试。第 2-5 中心的 2655 份脑部 MRI 扫描(2022 年 1 月至 2022 年 12 月)保留用于外部测试。从放射学报告中提取了文本特征,以指导一个侧重于病变特征的 DL 模型(ReportGuidedNet)。为进行比较分析,还开发了另一个不含文本特征的 DL 模型(PlainNet)。两个模型都诊断了 15 种情况,包括 14 种疾病和正常大脑。每个模型的性能通过计算接收者工作特征曲线下的宏观和微观平均面积(ma-AUC、mi-AUC)进行评估。注意力图是模型注意力的可视化,采用 5 点李克特量表进行评估。结果 在所有诊断中,ReportGuidedNet 的内部表现均优于 PlainNet(ma-AUC:0.93 [95% CI: 0.91- 0.95] 对 0.85 [95% CI: 0.81-0.88]; mi-AUC:0.93[95%CI:0.90-0.95] 对 0.89 [95% CI:0.83-0.92])和外部(ma-AUC:0.91 [95% CI: 0.88-0.93] 对 0.75 [95% CI: 0.72-0.79]; mi-AUC:0.90 [95% CI: 0.87-0.92] 对 0.76 [95% CI: 0.72-0.80]) 测试集。内部和外部测试集之间的性能差异,ReportGuidedNet 小于 PlainNet(Δma-AUC:0.03 对 0.10;Δmi-AUC:0.02 对 0.13)。ReportGuidedNet的Likert量表评分高于PlainNet(平均±标准差:2.50±1.09对1.32±1.20;P < .001)。结论 整合放射报告文本特征提高了 DL 模型检测脑部病变的能力,增强了可解释性和可推广性。以 CC BY 4.0 许可发布。
{"title":"Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.","authors":"Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li","doi":"10.1148/ryai.230520","DOIUrl":"10.1148/ryai.230520","url":null,"abstract":"<p><p>Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; <i>P</i> < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. <b>Keywords:</b> Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230520"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142393849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240101
Jeffrey D Rudie, Hui-Ming Lin, Robyn L Ball, Sabeena Jalal, Luciano M Prevedello, Savvas Nicolaou, Brett S Marinelli, Adam E Flanders, Kirti Magudia, George Shih, Melissa A Davis, John Mongan, Peter D Chang, Ferco H Berger, Sebastiaan Hermans, Meng Law, Tyler Richards, Jan-Peter Grunz, Andreas Steven Kunz, Shobhit Mathur, Sandro Galea-Soler, Andrew D Chung, Saif Afat, Chin-Chi Kuo, Layal Aweidah, Ana Villanueva Campos, Arjuna Somasundaram, Felipe Antonio Sanchez Tijmes, Attaporn Jantarangkoon, Leonardo Kayat Bittencourt, Michael Brassil, Ayoub El Hajjami, Hakan Dogan, Muris Becircic, Agrahara G Bharatkumar, Eduardo Moreno Júdice de Mattos Farina, Errol Colak

Supplemental material is available for this article.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。RSNA 腹部创伤 CT (RATIC) 数据集包含 4,274 项与创伤相关的腹部 CT 研究注释,可在 https://imaging.rsna.org/dataset/5 和 https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection 上查阅。©RSNA,2024。
{"title":"The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset.","authors":"Jeffrey D Rudie, Hui-Ming Lin, Robyn L Ball, Sabeena Jalal, Luciano M Prevedello, Savvas Nicolaou, Brett S Marinelli, Adam E Flanders, Kirti Magudia, George Shih, Melissa A Davis, John Mongan, Peter D Chang, Ferco H Berger, Sebastiaan Hermans, Meng Law, Tyler Richards, Jan-Peter Grunz, Andreas Steven Kunz, Shobhit Mathur, Sandro Galea-Soler, Andrew D Chung, Saif Afat, Chin-Chi Kuo, Layal Aweidah, Ana Villanueva Campos, Arjuna Somasundaram, Felipe Antonio Sanchez Tijmes, Attaporn Jantarangkoon, Leonardo Kayat Bittencourt, Michael Brassil, Ayoub El Hajjami, Hakan Dogan, Muris Becircic, Agrahara G Bharatkumar, Eduardo Moreno Júdice de Mattos Farina, Errol Colak","doi":"10.1148/ryai.240101","DOIUrl":"10.1148/ryai.240101","url":null,"abstract":"<p><p>\u0000 <i>Supplemental material is available for this article.</i>\u0000 </p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240101"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142509376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230296
Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill

Purpose To develop a highly generalizable weakly supervised model to automatically detect and localize image-level intracranial hemorrhage (ICH) by using study-level labels. Materials and Methods In this retrospective study, the proposed model was pretrained on the image-level Radiological Society of North America dataset and fine-tuned on a local dataset by using attention-based bidirectional long short-term memory networks. This local training dataset included 10 699 noncontrast head CT scans in 7469 patients, with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and its generalizability was evaluated on an external independent dataset. Results The model achieved a positive predictive value (PPV) of 85.7% (95% CI: 84.0, 87.4) and an area under the receiver operating characteristic curve of 0.96 (95% CI: 0.96, 0.97) on the held-out local test set (n = 7243, 3721 female) and 89.3% (95% CI: 87.8, 90.7) and 0.96 (95% CI: 0.96, 0.97), respectively, on the external test set (n = 491, 178 female). For 100 randomly selected samples, the model achieved performance on par with two neuroradiologists, but with a significantly faster (P < .05) diagnostic time of 5.04 seconds per scan (vs 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with neuroradiologists' interpretations. Conclusion The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows. Keywords: Computer-Aided Diagnosis (CAD), Brain/Brain Stem, Hemorrhage, Convolutional Neural Network (CNN), Transfer Learning Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Akinci D'Antonoli and Rudie in this issue.
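The exact network is not given in the abstract, so the following is a minimal sketch, under assumed feature dimensions, of an attention-based bidirectional LSTM that pools per-slice CT features into a single study-level hemorrhage prediction and exposes per-slice attention weights (the basis for slice-level localization from study-level labels).

```python
# Minimal sketch (not the authors' exact network): attention-weighted BiLSTM pooling
# of per-slice feature vectors into one study-level hemorrhage logit.
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention score per slice
        self.head = nn.Linear(2 * hidden, 1)   # study-level hemorrhage logit

    def forward(self, slice_feats: torch.Tensor):
        # slice_feats: (batch, n_slices, feat_dim), e.g., CNN embeddings of each slice
        h, _ = self.lstm(slice_feats)                     # (batch, n_slices, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)      # (batch, n_slices, 1)
        study_vec = (weights * h).sum(dim=1)              # attention-weighted pooling
        return self.head(study_vec), weights.squeeze(-1)  # logit + per-slice attention

model = AttentionBiLSTM()
logit, slice_attention = model(torch.randn(2, 40, 256))   # 2 studies, 40 slices each
print(logit.shape, slice_attention.shape)
```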

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响文章内容的错误。目的 建立一个高度通用的弱监督模型,利用研究级标签自动检测和定位图像级颅内出血(ICH)。材料与方法 在这项回顾性研究中,利用基于注意力的双向长短期记忆网络,在图像级 RSNA 数据集上对所提出的模型进行了预训练,并在本地数据集上对其进行了微调。该本地训练数据集包括来自 7469 名患者的 10,699 张非对比头部 CT 扫描图像,这些图像带有从放射学报告中提取的 ICH 研究级标签。使用 McNemar 检验将模型的性能与两位资深神经放射学专家在 100 个随机测试扫描中的性能进行了比较,并在外部独立数据集上评估了模型的普适性。结果 在本地测试集(n = 7243,3721 名女性)上,该模型的阳性预测值(PPV)为 85.7%(95% CI:[84.0%, 87.4%]),AUC 为 0.96(95% CI:[0.96, 0.97]);在外部测试集(n = 491,178 名女性)上,该模型的阳性预测值(PPV)为 89.3%(95% CI:[87.8%, 90.7%]),AUC 为 0.96(95% CI:[0.96, 0.97])。在随机抽取的 100 个样本中,该模型的表现与两名神经放射科医生相当,但诊断时间明显更快(P < .05),每次扫描仅需 5.04 秒(而两名神经放射科医生的诊断时间分别为 86 秒和 22.2 秒)。该模型的注意力权重和热图与神经放射科医生的解释一致。结论 所提出的模型具有很高的普适性和 PPV 值,为加快 ICH 检测和优先排序提供了有价值的工具,同时减少了放射医师工作流程中假阳性的中断。©RSNA,2024。
{"title":"Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.","authors":"Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill","doi":"10.1148/ryai.230296","DOIUrl":"10.1148/ryai.230296","url":null,"abstract":"<p><p>Purpose To develop a highly generalizable weakly supervised model to automatically detect and localize image-level intracranial hemorrhage (ICH) by using study-level labels. Materials and Methods In this retrospective study, the proposed model was pretrained on the image-level Radiological Society of North America dataset and fine-tuned on a local dataset by using attention-based bidirectional long short-term memory networks. This local training dataset included 10 699 noncontrast head CT scans in 7469 patients, with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and its generalizability was evaluated on an external independent dataset. Results The model achieved a positive predictive value (PPV) of 85.7% (95% CI: 84.0, 87.4) and an area under the receiver operating characteristic curve of 0.96 (95% CI: 0.96, 0.97) on the held-out local test set (<i>n</i> = 7243, 3721 female) and 89.3% (95% CI: 87.8, 90.7) and 0.96 (95% CI: 0.96, 0.97), respectively, on the external test set (<i>n</i> = 491, 178 female). For 100 randomly selected samples, the model achieved performance on par with two neuroradiologists, but with a significantly faster (<i>P</i> < .05) diagnostic time of 5.04 seconds per scan (vs 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with neuroradiologists' interpretations. Conclusion The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows. <b>Keywords:</b> Computer-Aided Diagnosis (CAD), Brain/Brain Stem, Hemorrhage, Convolutional Neural Network (CNN), Transfer Learning <i>Supplemental material is available for this article.</i> © RSNA, 2024 See also the commentary by Akinci D'Antonoli and Rudie in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230296"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142081915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0