
Radiology-Artificial Intelligence: Latest Publications

Development and Validation of a Deep Learning Model to Reduce the Interference of Rectal Artifacts in MRI-based Prostate Cancer Diagnosis.
IF 9.8 Pub Date : 2024-03-01 DOI: 10.1148/ryai.230362
Lei Hu, Xiangyu Guo, Dawei Zhou, Zhen Wang, Lisong Dai, Liang Li, Ying Li, Tian Zhang, Haining Long, Chengxin Yu, Zhen-Wei Shi, Chu Han, Cheng Lu, Jungong Zhao, Yuehua Li, Yunfei Zha, Zaiyi Liu

Purpose To develop an MRI-based model for clinically significant prostate cancer (csPCa) diagnosis that can resist rectal artifact interference. Materials and Methods This retrospective study included 2203 male patients with prostate lesions who underwent biparametric MRI and biopsy between January 2019 and June 2023. Targeted adversarial training with proprietary adversarial samples (TPAS) strategy was proposed to enhance model resistance against rectal artifacts. The automated csPCa diagnostic models trained with and without TPAS were compared using multicenter validation datasets. The impact of rectal artifacts on the diagnostic performance of each model at the patient and lesion levels was compared using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC). The AUC between models was compared using the DeLong test, and the AUPRC was compared using the bootstrap method. Results The TPAS model exhibited diagnostic performance improvements of 6% at the patient level (AUC: 0.87 vs 0.81, P < .001) and 7% at the lesion level (AUPRC: 0.84 vs 0.77, P = .007) compared with the control model. The TPAS model demonstrated less performance decline in the presence of rectal artifact-pattern adversarial noise than the control model (ΔAUC: -17% vs -19%, ΔAUPRC: -18% vs -21%). The TPAS model performed better than the control model in patients with moderate (AUC: 0.79 vs 0.73, AUPRC: 0.68 vs 0.61) and severe (AUC: 0.75 vs 0.57, AUPRC: 0.69 vs 0.59) artifacts. Conclusion This study demonstrates that the TPAS model can reduce rectal artifact interference in MRI-based csPCa diagnosis, thereby improving its performance in clinical applications. Keywords: MR-Diffusion-weighted Imaging, Urinary, Prostate, Comparative Studies, Diagnosis, Transfer Learning Clinical trial registration no. ChiCTR23000069832 Supplemental material is available for this article. Published under a CC BY 4.0 license.
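The abstract compares AUPRC values between the TPAS and control models using a bootstrap method. A minimal NumPy sketch of such a comparison (the paper's exact bootstrap procedure is not specified here; `average_precision` and the percentile interval below are illustrative assumptions):

```python
import numpy as np

def average_precision(y_true, scores):
    """AUPRC via the step-wise average-precision formula."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    precision = np.cumsum(y) / np.arange(1, len(y) + 1)
    return float((precision * y).sum() / y.sum())

def bootstrap_auprc_diff(y_true, scores_a, scores_b, n_boot=2000, seed=0):
    """Observed AUPRC difference (A - B) with a 95% percentile bootstrap interval."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y_true)
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    observed = average_precision(y, a) - average_precision(y, b)
    diffs = []
    while len(diffs) < n_boot:
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():
            continue  # a resample must contain both classes
        diffs.append(average_precision(y[idx], a[idx])
                     - average_precision(y[idx], b[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return observed, (float(lo), float(hi))
```

A difference whose bootstrap interval excludes zero corresponds to the kind of significant AUPRC gap reported at the lesion level (0.84 vs 0.77, P = .007).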

Citations: 0
Expert-centered Evaluation of Deep Learning Algorithms for Brain Tumor Segmentation.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.220231
Katharina V Hoebel, Christopher P Bridge, Sara Ahmed, Oluwatosin Akintola, Caroline Chung, Raymond Y Huang, Jason M Johnson, Albert Kim, K Ina Ly, Ken Chang, Jay Patel, Marco Pinho, Tracy T Batchelor, Bruce R Rosen, Elizabeth R Gerstner, Jayashree Kalpathy-Cramer

Purpose To present results from a literature survey on practices in deep learning segmentation algorithm evaluation and perform a study on expert quality perception of brain tumor segmentation. Materials and Methods A total of 180 articles reporting on brain tumor segmentation algorithms were surveyed for the reported quality evaluation. Additionally, ratings of segmentation quality on a four-point scale were collected from medical professionals for 60 brain tumor segmentation cases. Results Of the surveyed articles, Dice score, sensitivity, and Hausdorff distance were the most popular metrics to report segmentation performance. Notably, only 2.8% of the articles included clinical experts' evaluation of segmentation quality. The experimental results revealed a low interrater agreement (Krippendorff α, 0.34) in experts' segmentation quality perception. Furthermore, the correlations between the ratings and commonly used quantitative quality metrics were low (Kendall tau between Dice score and mean rating, 0.23; Kendall tau between Hausdorff distance and mean rating, 0.51), with large variability among the experts. Conclusion The results demonstrate that quality ratings are prone to variability due to the ambiguity of tumor boundaries and individual perceptual differences, and existing metrics do not capture the clinical perception of segmentation quality. Keywords: Brain Tumor Segmentation, Deep Learning Algorithms, Glioblastoma, Cancer, Machine Learning Clinical trial registration nos. NCT00756106 and NCT00662506 Supplemental material is available for this article. © RSNA, 2023.
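The Dice score, the most commonly reported metric among the surveyed articles, measures volume overlap between two binary masks. A generic sketch (not the surveyed papers' code):

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Because Dice rewards bulk overlap, it can remain high while boundary regions (the main source of the expert disagreement described above) are segmented poorly, which is consistent with the low rating-metric correlations the study reports.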

Citations: 0
Data Liberation and Crowdsourcing in Medical Research: The Intersection of Collective and Artificial Intelligence.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230006
Jefferson R Wilson, Luciano M Prevedello, Christopher D Witiw, Adam E Flanders, Errol Colak

In spite of an exponential increase in the volume of medical data produced globally, much of these data are inaccessible to those who might best use them to develop improved health care solutions through the application of advanced analytics such as artificial intelligence. Data liberation and crowdsourcing represent two distinct but interrelated approaches to bridging existing data silos and accelerating the pace of innovation internationally. In this article, we examine these concepts in the context of medical artificial intelligence research, summarizing their potential benefits, identifying potential pitfalls, and ultimately making a case for their expanded use going forward. A practical example of a crowdsourced competition using an international medical imaging dataset is provided. Keywords: Artificial Intelligence, Data Liberation, Crowdsourcing © RSNA, 2023.

Citations: 0
A Deep Learning Pipeline for Assessing Ventricular Volumes from a Cardiac MRI Registry of Patients with Single Ventricle Physiology.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230132
Tina Yao, Nicole St Clair, Gabriel F Miller, Adam L Dorfman, Mark A Fogel, Sunil Ghelani, Rajesh Krishnamurthy, Christopher Z Lam, Michael Quail, Joshua D Robinson, David Schidlow, Timothy C Slesnick, Justin Weigand, Jennifer A Steeden, Rahul H Rathod, Vivek Muthurangu

Purpose To develop an end-to-end deep learning (DL) pipeline for automated ventricular segmentation of cardiac MRI data from a multicenter registry of patients with Fontan circulation (Fontan Outcomes Registry Using CMR Examinations [FORCE]). Materials and Methods This retrospective study used 250 cardiac MRI examinations (November 2007-December 2022) from 13 institutions for training, validation, and testing. The pipeline contained three DL models: a classifier to identify short-axis cine stacks and two U-Net 3+ models for image cropping and segmentation. The automated segmentations were evaluated on the test set (n = 50) by using the Dice score. Volumetric and functional metrics derived from DL and ground truth manual segmentations were compared using Bland-Altman and intraclass correlation analysis. The pipeline was further qualitatively evaluated on 475 unseen examinations. Results There were acceptable limits of agreement (LOA) and minimal biases between the ground truth and DL end-diastolic volume (EDV) (bias: -0.6 mL/m2, LOA: -20.6 to 19.5 mL/m2) and end-systolic volume (ESV) (bias: -1.1 mL/m2, LOA: -18.1 to 15.9 mL/m2), with high intraclass correlation coefficients (ICCs > 0.97) and Dice scores (EDV, 0.91 and ESV, 0.86). There was moderate agreement for ventricular mass (bias: -1.9 g/m2, LOA: -17.3 to 13.5 g/m2) and an ICC of 0.94. There was also acceptable agreement for stroke volume (bias: 0.6 mL/m2, LOA: -17.2 to 18.3 mL/m2) and ejection fraction (bias: 0.6%, LOA: -12.2% to 13.4%), with high ICCs (>0.81). The pipeline achieved satisfactory segmentation in 68% of the 475 unseen examinations, while 26% needed minor adjustments, 5% needed major adjustments, and in 0.4%, the cropping model failed. Conclusion The DL pipeline can provide fast standardized segmentation for patients with single ventricle physiology across multiple centers. This pipeline can be applied to all cardiac MRI examinations in the FORCE registry. 
Keywords: Cardiac, Adults and Pediatrics, MR Imaging, Congenital, Volume Analysis, Segmentation, Quantification Supplemental material is available for this article. © RSNA, 2023.
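The bias and 95% limits of agreement quoted above follow the standard Bland-Altman construction (mean difference ± 1.96 SD of the differences). A minimal sketch, not the pipeline's actual evaluation code:

```python
import numpy as np

def bland_altman(manual, automated):
    """Bias and 95% limits of agreement between two measurement methods."""
    d = np.asarray(automated, dtype=float) - np.asarray(manual, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample SD of the paired differences
    return float(bias), (float(bias - 1.96 * sd), float(bias + 1.96 * sd))
```

Applied to paired manual and deep learning EDV measurements, this yields numbers directly comparable to the reported bias of -0.6 mL/m2 with limits of agreement of -20.6 to 19.5 mL/m2.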

Citations: 0
Accuracy of Radiomics in Predicting IDH Mutation Status in Diffuse Gliomas: A Bivariate Meta-Analysis.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.220257
Gianfranco Di Salle, Lorenzo Tumminello, Maria Elena Laino, Sherif Shalaby, Gayane Aghakhanyan, Salvatore Claudio Fanni, Maria Febi, Jorge Eduardo Shortrede, Mario Miccoli, Lorenzo Faggioni, Mirco Cosottini, Emanuele Neri

Purpose To perform a systematic review and meta-analysis assessing the predictive accuracy of radiomics in the noninvasive determination of isocitrate dehydrogenase (IDH) status in grade 4 and lower-grade diffuse gliomas. Materials and Methods A systematic search was performed in the PubMed, Scopus, Embase, Web of Science, and Cochrane Library databases for relevant articles published between January 1, 2010, and July 7, 2021. Pooled sensitivity and specificity across studies were estimated. Risk of bias was evaluated using Quality Assessment of Diagnostic Accuracy Studies-2, and methods were evaluated using the radiomics quality score (RQS). Additional subgroup analyses were performed according to tumor grade, RQS, and number of sequences used (PROSPERO ID: CRD42021268958). Results Twenty-six studies that included 3280 patients were included for analysis. The pooled sensitivity and specificity of radiomics for the detection of IDH mutation were 79% (95% CI: 76, 83) and 80% (95% CI: 76, 83), respectively. Low RQS scores were found overall for the included works. Subgroup analyses showed lower false-positive rates in very low RQS studies (RQS < 6) (meta-regression, z = -1.9; P = .02) compared with adequate RQS studies. No substantial differences were found in pooled sensitivity and specificity for the pure grade 4 gliomas group compared with the all-grade gliomas group (81% and 86% vs 79% and 79%, respectively) and for studies using single versus multiple sequences (80% and 77% vs 79% and 82%, respectively). Conclusion The pooled data showed that radiomics achieved good accuracy performance in distinguishing IDH mutation status in patients with grade 4 and lower-grade diffuse gliomas. The overall methodologic quality (RQS) was low and introduced potential bias. Keywords: Neuro-Oncology, Radiomics, Integration, Application Domain, Glioblastoma, IDH Mutation, Radiomics Quality Scoring Supplemental material is available for this article. 
Published under a CC BY 4.0 license.
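The pooled sensitivity and specificity above come from a bivariate meta-analysis model. As a simplified, univariate stand-in (an assumption for illustration, not the authors' model), per-study sensitivities can be pooled with inverse-variance weights on the logit scale:

```python
import numpy as np

def pooled_sensitivity(tp, pos):
    """Fixed-effect pooled sensitivity on the logit scale: a simplified,
    univariate stand-in for a bivariate meta-analysis model."""
    tp = np.asarray(tp, dtype=float)    # true positives per study
    pos = np.asarray(pos, dtype=float)  # diseased cases per study
    p = (tp + 0.5) / (pos + 1.0)        # continuity-corrected proportions
    logit = np.log(p / (1.0 - p))
    w = pos * p * (1.0 - p)             # inverse of the logit's variance
    pooled_logit = (w * logit).sum() / w.sum()
    return float(1.0 / (1.0 + np.exp(-pooled_logit)))
```

A full bivariate model additionally estimates the correlation between sensitivity and specificity across studies and a random-effects variance, which this fixed-effect sketch omits.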

Citations: 0
The LLM Will See You Now: Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230568
Hari Trivedi, Judy Wawira Gichoya
Citations: 0
Performance of the Winning Algorithms of the RSNA 2022 Cervical Spine Fracture Detection Challenge.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230256
Ghee Rye Lee, Adam E Flanders, Tyler Richards, Felipe Kitamura, Errol Colak, Hui Ming Lin, Robyn L Ball, Jason Talbott, Luciano M Prevedello

Purpose To evaluate and report the performance of the winning algorithms of the Radiological Society of North America Cervical Spine Fracture AI Challenge. Materials and Methods The competition was open to the public on Kaggle from July 28 to October 27, 2022. A sample of 3112 CT scans with and without cervical spine fractures (CSFx) were assembled from multiple sites (12 institutions across six continents) and prepared for the competition. The test set had 1093 scans (private test set: n = 789; mean age, 53.40 years ± 22.86 [SD]; 509 males; public test set: n = 304; mean age, 52.51 years ± 20.73; 189 males) and 847 fractures. The eight top-performing artificial intelligence (AI) algorithms were retrospectively evaluated, and the area under the receiver operating characteristic curve (AUC) value, F1 score, sensitivity, and specificity were calculated. Results A total of 1108 contestants composing 883 teams worldwide participated in the competition. The top eight AI models showed high performance, with a mean AUC value of 0.96 (95% CI: 0.95, 0.96), mean F1 score of 90% (95% CI: 90%, 91%), mean sensitivity of 88% (95% Cl: 86%, 90%), and mean specificity of 94% (95% CI: 93%, 96%). The highest values reported for previous models were an AUC of 0.85, F1 score of 81%, sensitivity of 76%, and specificity of 97%. Conclusion The competition successfully facilitated the development of AI models that could detect and localize CSFx on CT scans with high performance outcomes, which appear to exceed known values of previously reported models. Further study is needed to evaluate the generalizability of these models in a clinical environment. Keywords: Cervical Spine, Fracture Detection, Machine Learning, Artificial Intelligence Algorithms, CT, Head/Neck Supplemental material is available for this article. © RSNA, 2024.
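The challenge metrics reported above (F1 score, sensitivity, specificity) all derive from the confusion-matrix counts of a binary fracture/no-fracture decision. A generic sketch, not the challenge's scoring code:

```python
def fracture_detection_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and F1 score from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall on fracture-positive scans
    specificity = tn / (tn + fp)            # recall on fracture-negative scans
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity, "f1": f1}
```

Note that AUC, the remaining reported metric, is threshold-free and cannot be recovered from a single confusion matrix; it requires the models' continuous scores.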

{"title":"Performance of the Winning Algorithms of the RSNA 2022 Cervical Spine Fracture Detection Challenge.","authors":"Ghee Rye Lee, Adam E Flanders, Tyler Richards, Felipe Kitamura, Errol Colak, Hui Ming Lin, Robyn L Ball, Jason Talbott, Luciano M Prevedello","doi":"10.1148/ryai.230256","DOIUrl":"10.1148/ryai.230256","url":null,"abstract":"<p><p>Purpose To evaluate and report the performance of the winning algorithms of the Radiological Society of North America Cervical Spine Fracture AI Challenge. Materials and Methods The competition was open to the public on Kaggle from July 28 to October 27, 2022. A sample of 3112 CT scans with and without cervical spine fractures (CSFx) were assembled from multiple sites (12 institutions across six continents) and prepared for the competition. The test set had 1093 scans (private test set: <i>n</i> = 789; mean age, 53.40 years ± 22.86 [SD]; 509 males; public test set: <i>n</i> = 304; mean age, 52.51 years ± 20.73; 189 males) and 847 fractures. The eight top-performing artificial intelligence (AI) algorithms were retrospectively evaluated, and the area under the receiver operating characteristic curve (AUC) value, F1 score, sensitivity, and specificity were calculated. Results A total of 1108 contestants composing 883 teams worldwide participated in the competition. The top eight AI models showed high performance, with a mean AUC value of 0.96 (95% CI: 0.95, 0.96), mean F1 score of 90% (95% CI: 90%, 91%), mean sensitivity of 88% (95% Cl: 86%, 90%), and mean specificity of 94% (95% CI: 93%, 96%). The highest values reported for previous models were an AUC of 0.85, F1 score of 81%, sensitivity of 76%, and specificity of 97%. Conclusion The competition successfully facilitated the development of AI models that could detect and localize CSFx on CT scans with high performance outcomes, which appear to exceed known values of previously reported models. 
Further study is needed to evaluate the generalizability of these models in a clinical environment. <b>Keywords:</b> Cervical Spine, Fracture Detection, Machine Learning, Artificial Intelligence Algorithms, CT, Head/Neck <i>Supplemental material is available for this article.</i> © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831508/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139088849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Weak Supervision, Strong Results: Achieving High Performance in Intracranial Hemorrhage Detection with Fewer Annotation Labels.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230598
Kareem A Wahid, David Fuentes
{"title":"Weak Supervision, Strong Results: Achieving High Performance in Intracranial Hemorrhage Detection with Fewer Annotation Labels.","authors":"Kareem A Wahid, David Fuentes","doi":"10.1148/ryai.230598","DOIUrl":"10.1148/ryai.230598","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831509/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139643049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Scottish Medical Imaging Archive: A Unique Resource for Imaging-related Research.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230466
Gary J Whitman, David J Vining
{"title":"The Scottish Medical Imaging Archive: A Unique Resource for Imaging-related Research.","authors":"Gary J Whitman, David J Vining","doi":"10.1148/ryai.230466","DOIUrl":"10.1148/ryai.230466","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139080957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Examination-Level Supervision for Deep Learning-based Intracranial Hemorrhage Detection on Head CT Scans.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230159
Jacopo Teneggi, Paul H Yi, Jeremias Sulam

Purpose To compare the effectiveness of weak supervision (ie, with examination-level labels only) and strong supervision (ie, with image-level labels) in training deep learning models for detection of intracranial hemorrhage (ICH) on head CT scans. Materials and Methods In this retrospective study, an attention-based convolutional neural network was trained with either local (ie, image level) or global (ie, examination level) binary labels on the Radiological Society of North America (RSNA) 2019 Brain CT Hemorrhage Challenge dataset of 21 736 examinations (8876 [40.8%] ICH) and 752 422 images (107 784 [14.3%] ICH). The CQ500 (436 examinations; 212 [48.6%] ICH) and CT-ICH (75 examinations; 36 [48.0%] ICH) datasets were employed for external testing. Performance in detecting ICH was compared between weak (examination-level labels) and strong (image-level labels) learners as a function of the number of labels available during training. Results On examination-level binary classification, strong and weak learners did not have different area under the receiver operating characteristic curve values on the internal validation split (0.96 vs 0.96; P = .64) and the CQ500 dataset (0.90 vs 0.92; P = .15). Weak learners outperformed strong ones on the CT-ICH dataset (0.95 vs 0.92; P = .03). Weak learners had better section-level ICH detection performance when more than 10 000 labels were available for training (average f1 = 0.73 vs 0.65; P < .001). Weakly supervised models trained on the entire RSNA dataset required 35 times fewer labels than equivalent strong learners. Conclusion Strongly supervised models did not achieve better performance than weakly supervised ones, which could reduce radiologist labor requirements for prospective dataset curation. Keywords: CT, Head/Neck, Brain/Brain Stem, Hemorrhage Supplemental material is available for this article. © RSNA, 2023 See also commentary by Wahid and Fuentes in this issue.
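The abstract above contrasts weak supervision (one label per examination) with strong supervision (one label per image). A minimal sketch of the weak-supervision idea follows: per-section logits must be pooled into a single examination logit before the binary loss is applied, so only one examination-level label is ever needed. The paper uses an attention-based CNN; the softmax-attention pooling and the toy logits below are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch of examination-level (weak) supervision: pool per-image
# logits into one examination logit, then apply a single binary loss.
import math

def attention_pool(image_logits):
    """Pool per-section logits into one examination logit using
    softmax attention over the sections of the scan."""
    weights = [math.exp(z) for z in image_logits]
    total = sum(weights)
    return sum((w / total) * z for w, z in zip(weights, image_logits))

def bce_loss(logit, label):
    """Binary cross-entropy on the pooled examination logit."""
    p = 1.0 / (1.0 + math.exp(-logit))
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

# One examination = many CT sections; only one label for the whole scan.
hemorrhage_exam = [-2.0, -1.5, 3.0, 2.5, -1.0]  # two suspicious sections
normal_exam = [-2.0, -1.8, -2.5, -1.2, -1.9]

print(attention_pool(hemorrhage_exam) > 0)  # attention favors high logits
print(attention_pool(normal_exam) < 0)
```

Because the attention weights concentrate on high-logit sections, a few positive sections dominate the pooled score, which is how an examination-level label can still drive section-level detection without any per-image annotation.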

Citations: 0