Radiology-Artificial Intelligence: Latest Articles

Denoising Multiphase Functional Cardiac CT Angiography Using Deep Learning and Synthetic Data.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230153
Veit Sandfort, Martin J Willemink, Marina Codari, Domenico Mastrodicasa, Dominik Fleischmann

Coronary CT angiography is increasingly used for cardiac diagnosis. Dose modulation techniques can reduce radiation dose, but resulting functional images are noisy and challenging for functional analysis. This retrospective study describes and evaluates a deep learning method for denoising functional cardiac imaging, taking advantage of multiphase information in a three-dimensional convolutional neural network. Coronary CT angiograms (n = 566) were used to derive synthetic data for training. Deep learning-based image denoising was compared with unprocessed images and a standard noise reduction algorithm (block-matching and three-dimensional filtering [BM3D]). Noise and signal-to-noise ratio measurements, as well as expert evaluation of image quality, were performed. To validate the use of the denoised images for cardiac quantification, threshold-based segmentation was performed, and results were compared with manual measurements on unprocessed images. Deep learning-based denoised images showed significantly improved noise compared with standard denoising-based images (SD of left ventricular blood pool, 20.3 HU ± 42.5 [SD] vs 33.4 HU ± 39.8 for deep learning-based image denoising vs BM3D; P < .0001). Expert evaluations of image quality were significantly higher in deep learning-based denoised images compared with standard denoising. Semiautomatic left ventricular size measurements on deep learning-based denoised images showed excellent correlation with expert quantification on unprocessed images (intraclass correlation coefficient, 0.97). Deep learning-based denoising using a three-dimensional approach resulted in excellent denoising performance and facilitated valid automatic processing of cardiac functional imaging. Keywords: Cardiac CT Angiography, Deep Learning, Image Denoising Supplemental material is available for this article. © RSNA, 2024.
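For readers who want a concrete picture of the multiphase, three-dimensional approach, the sketch below stacks cardiac phases along the channel axis of a small 3D convolutional network in PyTorch. The depth, channel widths, residual formulation, and the choice of ten phases are illustrative assumptions, not the architecture published by the authors.

```python
# Minimal sketch of a multiphase 3D convolutional denoiser (PyTorch).
# Layer count, channel width, and the residual design are assumptions for
# illustration, not the network described in the article.
import torch
import torch.nn as nn

class MultiphaseDenoiser3D(nn.Module):
    def __init__(self, n_phases: int = 10, width: int = 32):
        super().__init__()
        # Cardiac phases are stacked along the channel axis so the network can
        # borrow signal from neighboring phases when denoising each phase.
        self.net = nn.Sequential(
            nn.Conv3d(n_phases, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, n_phases, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual formulation: the network predicts the noise to subtract.
        return x - self.net(x)

# Training pairs would come from synthetic data: clean angiograms with
# simulated noise added, as the abstract describes.
model = MultiphaseDenoiser3D()
noisy = torch.randn(1, 10, 32, 64, 64)  # batch, phases, z, y, x
print(model(noisy).shape)               # torch.Size([1, 10, 32, 64, 64])
```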
Citations: 0
The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230126
Jeffrey D Rudie, Rachit Saluja, David A Weiss, Pierre Nedelec, Evan Calabrese, John B Colby, Benjamin Laguna, John Mongan, Steve Braunstein, Christopher P Hess, Andreas M Rauschecker, Leo P Sugrue, Javier E Villanueva-Meyer

The publicly available University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI dataset comprises 560 multimodal brain MRI examinations of 412 patients, with expert voxelwise annotations of 5136 brain metastases. Supplemental material is available for this article. © RSNA, 2024.
Citations: 0
Image Quality and Diagnostic Performance of Low-Dose Liver CT with Deep Learning Reconstruction versus Standard-Dose CT.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230192
Dong Ho Lee, Jeong Min Lee, Chang Hee Lee, Saif Afat, Ahmed Othman

Purpose To compare the image quality and diagnostic capability in detecting malignant liver tumors of low-dose CT (LDCT, 33% dose) with deep learning-based denoising (DLD) and standard-dose CT (SDCT, 100% dose) with model-based iterative reconstruction (MBIR). Materials and Methods In this prospective, multicenter, noninferiority study, individuals referred for liver CT scans were enrolled from three tertiary referral hospitals between February 2021 and August 2022. All liver CT scans were conducted using a dual-source scanner with the dose split into tubes A (67% dose) and B (33% dose). Blended images from tubes A and B were created using MBIR to produce SDCT images, whereas LDCT images used data from tube B and were reconstructed with DLD. The noise in liver images was measured and compared between imaging techniques. The diagnostic performance of each technique in detecting malignant liver tumors was evaluated by three independent radiologists using jackknife alternative free-response receiver operating characteristic analysis. Noninferiority of LDCT compared with SDCT was declared when the lower limit of the 95% CI for the difference in figure of merit (FOM) was greater than -0.10. Results A total of 296 participants (196 men, 100 women; mean age, 60.5 years ± 13.3 [SD]) were included. The mean noise level in the liver was significantly lower for LDCT (10.1) compared with SDCT (10.7) (P < .001). Diagnostic performance was assessed in 246 participants (108 malignant tumors in 90 participants). The reader-averaged FOM was 0.880 for SDCT and 0.875 for LDCT (P = .35). The difference fell within the noninferiority margin (difference, -0.005 [95% CI: -0.024, 0.012]). Conclusion Compared with SDCT with MBIR, LDCT using 33% of the standard radiation dose had reduced image noise and comparable diagnostic performance in detecting malignant liver tumors. Keywords: CT, Abdomen/GI, Liver, Comparative Studies, Diagnosis, Reconstruction Algorithms Clinical trial registration no. NCT05804799 © RSNA, 2024 Supplemental material is available for this article.
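The noninferiority claim reduces to a simple decision rule on the reported confidence interval; the sketch below applies that rule to the numbers given in the abstract (the function name is ours, not the study's analysis code).

```python
# Noninferiority decision rule from the abstract: LDCT is noninferior when the
# lower bound of the 95% CI for the FOM difference (LDCT minus SDCT) exceeds
# the prespecified margin of -0.10.
def is_noninferior(ci_lower: float, margin: float = -0.10) -> bool:
    return ci_lower > margin

# Values reported in the abstract: difference -0.005, 95% CI (-0.024, 0.012).
fom_difference = -0.005
ci_lower, ci_upper = -0.024, 0.012
print(is_noninferior(ci_lower))  # True -> LDCT with DLD judged noninferior
```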
Citations: 0
Finding the Pieces to Treat the Whole: Using Radiomics to Identify Tumor Habitats.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230547
Hersh Sagreiya
Citations: 0
AI for Detection of Tuberculosis: Implications for Global Health.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230327
Eui Jin Hwang, Won Gi Jeong, Pierre-Marie David, Matthew Arentz, Morten Ruhwald, Soon Ho Yoon

Tuberculosis, which primarily affects developing countries, remains a significant global health concern. Since the 2010s, the role of chest radiography has expanded in tuberculosis triage and screening beyond its traditional complementary role in the diagnosis of tuberculosis. Computer-aided diagnosis (CAD) systems for tuberculosis detection on chest radiographs have recently made substantial progress in diagnostic performance, thanks to deep learning technologies. The current performance of CAD systems for tuberculosis has approximated that of human experts, presenting a potential solution to the shortage of human readers to interpret chest radiographs in low- or middle-income, high-tuberculosis-burden countries. This article provides a critical appraisal of developmental process reporting in extant CAD software for tuberculosis, based on the Checklist for Artificial Intelligence in Medical Imaging. It also explores several considerations to scale up CAD solutions, encompassing manufacturer-independent CAD validation, economic and political aspects, and ethical concerns, as well as the potential for broadening radiography-based diagnosis to other nontuberculosis diseases. Collectively, CAD for tuberculosis will emerge as a representative deep learning application, catalyzing advances in global health and health equity. Keywords: Computer-aided Diagnosis (CAD), Conventional Radiography, Thorax, Lung, Machine Learning Supplemental material is available for this article. © RSNA, 2024.
Citations: 0
Vision Transformer-based Decision Support for Neurosurgical Intervention in Acute Traumatic Brain Injury: Automated Surgical Intervention Support Tool.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230088
Christopher W Smith, Armaan K Malhotra, Christopher Hammill, Derek Beaton, Erin M Harrington, Yingshi He, Husain Shakil, Amanda McFarlan, Blair Jones, Hui Ming Lin, François Mathieu, Avery B Nathens, Alun D Ackery, Garrick Mok, Muhammad Mamdani, Shobhit Mathur, Jefferson R Wilson, Robert Moreland, Errol Colak, Christopher D Witiw

Purpose To develop an automated triage tool to predict neurosurgical intervention for patients with traumatic brain injury (TBI). Materials and Methods A provincial trauma registry was reviewed to retrospectively identify patients with TBI from 2005 to 2022 treated at a specialized Canadian trauma center. Model training, validation, and testing were performed using head CT scans with binary reference standard patient-level labels corresponding to whether the patient received neurosurgical intervention. Performance and accuracy of the model, the Automated Surgical Intervention Support Tool for TBI (ASIST-TBI), were also assessed using a held-out consecutive test set of all patients with TBI presenting to the center between March 2021 and September 2022. Results Head CT scans from 2806 patients with TBI (mean age, 57 years ± 22 [SD]; 1955 [70%] men) were acquired between 2005 and 2021 and used for training, validation, and testing. Consecutive scans from an additional 612 patients (mean age, 61 years ± 22; 443 [72%] men) were used to assess the performance of ASIST-TBI. There was accurate prediction of neurosurgical intervention with an area under the receiver operating characteristic curve (AUC) of 0.92 (95% CI: 0.88, 0.94), accuracy of 87% (491 of 562), sensitivity of 87% (196 of 225), and specificity of 88% (295 of 337) on the test dataset. Performance on the held-out test dataset remained robust with an AUC of 0.89 (95% CI: 0.85, 0.91), accuracy of 84% (517 of 612), sensitivity of 85% (199 of 235), and specificity of 84% (318 of 377). Conclusion A novel deep learning model was developed that could accurately predict the requirement for neurosurgical intervention using acute TBI CT scans. Keywords: CT, Brain/Brain Stem, Surgery, Trauma, Prognosis, Classification, Application Domain, Traumatic Brain Injury, Triage, Machine Learning, Decision Support Supplemental material is available for this article. © RSNA, 2024 See also commentary by Haller in this issue.
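The sketch below shows how the reported patient-level test metrics (AUC, accuracy, sensitivity, specificity) are typically computed with scikit-learn; the labels, probabilities, and 0.5 operating threshold are toy assumptions, not the study's data or decision threshold.

```python
# Toy computation of AUC, accuracy, sensitivity, and specificity with
# scikit-learn; data and the 0.5 threshold are assumptions for illustration.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=612)                  # 1 = neurosurgical intervention
y_prob = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 612), 0, 1)

auc = roc_auc_score(y_true, y_prob)
y_pred = (y_prob >= 0.5).astype(int)                   # assumed operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f} acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```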
Citations: 0
Impact of a Categorical AI System for Digital Breast Tomosynthesis on Breast Cancer Interpretation by Both General Radiologists and Breast Imaging Specialists.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230137
Jiye G Kim, Bryan Haslam, Abdul Rahman Diab, Ashwin Sakhare, Giorgia Grisot, Hyunkwang Lee, Jacqueline Holt, Christoph I Lee, William Lotter, A Gregory Sorensen

Purpose To evaluate performance improvements of general radiologists and breast imaging specialists when interpreting a set of diverse digital breast tomosynthesis (DBT) examinations with the aid of a custom-built categorical artificial intelligence (AI) system. Materials and Methods A fully balanced multireader, multicase reader study was conducted to compare the performance of 18 radiologists (nine general radiologists and nine breast imaging specialists) reading 240 retrospectively collected screening DBT mammograms (mean patient age, 59.8 years ± 11.3 [SD]; 100% women), acquired between August 2016 and March 2019, with and without the aid of a custom-built categorical AI system. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity across general radiologists and breast imaging specialists reading with versus without AI were assessed. Reader performance was also analyzed as a function of breast cancer characteristics and patient subgroups. Results Every radiologist demonstrated improved interpretation performance when reading with versus without AI, with an average AUC of 0.93 versus 0.87, demonstrating a difference in AUC of 0.06 (95% CI: 0.04, 0.08; P < .001). Improvement in AUC was observed for both general radiologists (difference of 0.08; P < .001) and breast imaging specialists (difference of 0.04; P < .001) and across all cancer characteristics (lesion type, lesion size, and pathology) and patient subgroups (race and ethnicity, age, and breast density) examined. Conclusion A categorical AI system helped improve overall radiologist interpretation performance of DBT screening mammograms for both general radiologists and breast imaging specialists and across various patient subgroups and breast cancer characteristics. Keywords: Computer-aided Diagnosis, Screening Mammography, Digital Breast Tomosynthesis, Breast Cancer, Screening, Convolutional Neural Network (CNN), Artificial Intelligence Supplemental material is available for this article. © RSNA, 2024.
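As a rough illustration of the reader comparison, the sketch below computes per-reader AUC with and without AI aid on simulated scores and bootstraps cases for the reader-averaged difference; this is not the fully crossed multireader, multicase model used in the study, and all numbers are synthetic.

```python
# Simplified per-reader paired AUC comparison (with vs without AI) on
# simulated data; not the formal MRMC analysis used in the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases, n_readers = 240, 18
y = rng.integers(0, 2, size=n_cases)                         # cancer yes/no
without_ai = 0.5 * y[None, :] + rng.normal(0.4, 0.25, (n_readers, n_cases))
with_ai = without_ai + 0.15 * y[None, :]                     # simulated AI benefit

diffs = [roc_auc_score(y, w) - roc_auc_score(y, wo)
         for w, wo in zip(with_ai, without_ai)]
print(f"mean AUC difference: {np.mean(diffs):.3f}")

# Rough case-level bootstrap of the reader-averaged difference (illustrative).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n_cases, n_cases)
    if len(np.unique(y[idx])) < 2:
        continue
    boot.append(np.mean([roc_auc_score(y[idx], w[idx]) - roc_auc_score(y[idx], wo[idx])
                         for w, wo in zip(with_ai, without_ai)]))
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
```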
Citations: 0
Development and Validation of a Deep Learning Model to Reduce the Interference of Rectal Artifacts in MRI-based Prostate Cancer Diagnosis.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230362
Lei Hu, Xiangyu Guo, Dawei Zhou, Zhen Wang, Lisong Dai, Liang Li, Ying Li, Tian Zhang, Haining Long, Chengxin Yu, Zhen-Wei Shi, Chu Han, Cheng Lu, Jungong Zhao, Yuehua Li, Yunfei Zha, Zaiyi Liu

Purpose To develop an MRI-based model for clinically significant prostate cancer (csPCa) diagnosis that can resist rectal artifact interference. Materials and Methods This retrospective study included 2203 male patients with prostate lesions who underwent biparametric MRI and biopsy between January 2019 and June 2023. Targeted adversarial training with proprietary adversarial samples (TPAS) strategy was proposed to enhance model resistance against rectal artifacts. The automated csPCa diagnostic models trained with and without TPAS were compared using multicenter validation datasets. The impact of rectal artifacts on the diagnostic performance of each model at the patient and lesion levels was compared using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC). The AUC between models was compared using the DeLong test, and the AUPRC was compared using the bootstrap method. Results The TPAS model exhibited diagnostic performance improvements of 6% at the patient level (AUC: 0.87 vs 0.81, P < .001) and 7% at the lesion level (AUPRC: 0.84 vs 0.77, P = .007) compared with the control model. The TPAS model demonstrated less performance decline in the presence of rectal artifact-pattern adversarial noise than the control model (ΔAUC: -17% vs -19%, ΔAUPRC: -18% vs -21%). The TPAS model performed better than the control model in patients with moderate (AUC: 0.79 vs 0.73, AUPRC: 0.68 vs 0.61) and severe (AUC: 0.75 vs 0.57, AUPRC: 0.69 vs 0.59) artifacts. Conclusion This study demonstrates that the TPAS model can reduce rectal artifact interference in MRI-based csPCa diagnosis, thereby improving its performance in clinical applications. Keywords: MR-Diffusion-weighted Imaging, Urinary, Prostate, Comparative Studies, Diagnosis, Transfer Learning Clinical trial registration no. ChiCTR23000069832 Supplemental material is available for this article. Published under a CC BY 4.0 license.
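TPAS itself relies on proprietary, rectal-artifact-patterned adversarial samples that the abstract does not describe in reproducible detail; the sketch below shows only the generic idea of adversarial training with one-step (FGSM-style) perturbations on a toy classifier, as a stand-in rather than the authors' method.

```python
# Generic FGSM-style adversarial training on a toy classifier, illustrating the
# idea of hardening a model against structured perturbations. The artifact-
# patterned sample generation used by TPAS is proprietary and not shown here.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.01):
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # One-step perturbation in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))   # toy stand-in model
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 1, 64, 64)        # toy image batch
y = torch.randint(0, 2, (8,))        # csPCa yes/no labels

for _ in range(3):                   # a few illustrative steps
    x_adv = fgsm_perturb(model, x, y, loss_fn)
    optimizer.zero_grad()
    # Train jointly on clean and adversarial inputs.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    print(float(loss))
```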
Citations: 0
Expert-centered Evaluation of Deep Learning Algorithms for Brain Tumor Segmentation.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.220231
Katharina V Hoebel, Christopher P Bridge, Sara Ahmed, Oluwatosin Akintola, Caroline Chung, Raymond Y Huang, Jason M Johnson, Albert Kim, K Ina Ly, Ken Chang, Jay Patel, Marco Pinho, Tracy T Batchelor, Bruce R Rosen, Elizabeth R Gerstner, Jayashree Kalpathy-Cramer

Purpose To present results from a literature survey on practices in deep learning segmentation algorithm evaluation and perform a study on expert quality perception of brain tumor segmentation. Materials and Methods A total of 180 articles reporting on brain tumor segmentation algorithms were surveyed for the reported quality evaluation. Additionally, ratings of segmentation quality on a four-point scale were collected from medical professionals for 60 brain tumor segmentation cases. Results Of the surveyed articles, Dice score, sensitivity, and Hausdorff distance were the most popular metrics to report segmentation performance. Notably, only 2.8% of the articles included clinical experts' evaluation of segmentation quality. The experimental results revealed a low interrater agreement (Krippendorff α, 0.34) in experts' segmentation quality perception. Furthermore, the correlations between the ratings and commonly used quantitative quality metrics were low (Kendall tau between Dice score and mean rating, 0.23; Kendall tau between Hausdorff distance and mean rating, 0.51), with large variability among the experts. Conclusion The results demonstrate that quality ratings are prone to variability due to the ambiguity of tumor boundaries and individual perceptual differences, and existing metrics do not capture the clinical perception of segmentation quality. Keywords: Brain Tumor Segmentation, Deep Learning Algorithms, Glioblastoma, Cancer, Machine Learning Clinical trial registration nos. NCT00756106 and NCT00662506 Supplemental material is available for this article. © RSNA, 2023.
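The sketch below computes the metrics most often reported by the surveyed articles (Dice score, Hausdorff distance) and the rank agreement statistic used against expert ratings (Kendall tau), on toy masks and ratings rather than the study's data.

```python
# Dice score, Hausdorff distance, and Kendall tau on toy data; the masks and
# ratings below are illustrative, not the study's cases.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.stats import kendalltau

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.zeros((64, 64), dtype=bool)
ref = np.zeros((64, 64), dtype=bool)
pred[20:40, 20:40] = True
ref[22:42, 22:42] = True

# Symmetric Hausdorff distance between the two mask point sets.
p_pts, r_pts = np.argwhere(pred), np.argwhere(ref)
hd = max(directed_hausdorff(p_pts, r_pts)[0], directed_hausdorff(r_pts, p_pts)[0])
print(f"Dice={dice(pred, ref):.3f}, Hausdorff={hd:.1f}")

# Rank agreement between a quality metric and mean expert ratings across cases.
dice_scores = [0.91, 0.85, 0.60, 0.75, 0.95]
mean_ratings = [3.5, 3.0, 2.0, 2.5, 4.0]     # four-point quality scale
tau, p_value = kendalltau(dice_scores, mean_ratings)
print(f"Kendall tau={tau:.2f} (P={p_value:.2f})")
```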

Citations: 0
Data Liberation and Crowdsourcing in Medical Research: The Intersection of Collective and Artificial Intelligence.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230006
Jefferson R Wilson, Luciano M Prevedello, Christopher D Witiw, Adam E Flanders, Errol Colak

In spite of an exponential increase in the volume of medical data produced globally, much of these data are inaccessible to those who might best use them to develop improved health care solutions through the application of advanced analytics such as artificial intelligence. Data liberation and crowdsourcing represent two distinct but interrelated approaches to bridging existing data silos and accelerating the pace of innovation internationally. In this article, we examine these concepts in the context of medical artificial intelligence research, summarizing their potential benefits, identifying potential pitfalls, and ultimately making a case for their expanded use going forward. A practical example of a crowdsourced competition using an international medical imaging dataset is provided. Keywords: Artificial Intelligence, Data Liberation, Crowdsourcing © RSNA, 2023.

Citations: 0