
Latest publications in Radiology-Artificial Intelligence

A Deep Learning Pipeline for Assessing Ventricular Volumes from a Cardiac MRI Registry of Patients with Single Ventricle Physiology.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230132
Tina Yao, Nicole St Clair, Gabriel F Miller, Adam L Dorfman, Mark A Fogel, Sunil Ghelani, Rajesh Krishnamurthy, Christopher Z Lam, Michael Quail, Joshua D Robinson, David Schidlow, Timothy C Slesnick, Justin Weigand, Jennifer A Steeden, Rahul H Rathod, Vivek Muthurangu

Purpose To develop an end-to-end deep learning (DL) pipeline for automated ventricular segmentation of cardiac MRI data from a multicenter registry of patients with Fontan circulation (Fontan Outcomes Registry Using CMR Examinations [FORCE]). Materials and Methods This retrospective study used 250 cardiac MRI examinations (November 2007-December 2022) from 13 institutions for training, validation, and testing. The pipeline contained three DL models: a classifier to identify short-axis cine stacks and two U-Net 3+ models for image cropping and segmentation. The automated segmentations were evaluated on the test set (n = 50) by using the Dice score. Volumetric and functional metrics derived from DL and ground truth manual segmentations were compared using Bland-Altman and intraclass correlation analysis. The pipeline was further qualitatively evaluated on 475 unseen examinations. Results There were acceptable limits of agreement (LOA) and minimal biases between the ground truth and DL end-diastolic volume (EDV) (bias: -0.6 mL/m2, LOA: -20.6 to 19.5 mL/m2) and end-systolic volume (ESV) (bias: -1.1 mL/m2, LOA: -18.1 to 15.9 mL/m2), with high intraclass correlation coefficients (ICCs > 0.97) and Dice scores (EDV, 0.91 and ESV, 0.86). There was moderate agreement for ventricular mass (bias: -1.9 g/m2, LOA: -17.3 to 13.5 g/m2) and an ICC of 0.94. There was also acceptable agreement for stroke volume (bias: 0.6 mL/m2, LOA: -17.2 to 18.3 mL/m2) and ejection fraction (bias: 0.6%, LOA: -12.2% to 13.4%), with high ICCs (>0.81). The pipeline achieved satisfactory segmentation in 68% of the 475 unseen examinations, while 26% needed minor adjustments, 5% needed major adjustments, and in 0.4%, the cropping model failed. Conclusion The DL pipeline can provide fast standardized segmentation for patients with single ventricle physiology across multiple centers. This pipeline can be applied to all cardiac MRI examinations in the FORCE registry. 
Keywords: Cardiac, Adults and Pediatrics, MR Imaging, Congenital, Volume Analysis, Segmentation, Quantification Supplemental material is available for this article. © RSNA, 2023.
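The agreement metrics reported above (Dice score, Bland-Altman bias and 95% limits of agreement) reduce to a few lines of arithmetic. A minimal sketch follows; `dice` and `bland_altman_loa` are hypothetical helper names for illustration, not part of the study's pipeline:

```python
from statistics import mean, stdev

def dice(a, b):
    """Dice similarity between two binary masks given as flat lists of 0/1."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2.0 * inter / total

def bland_altman_loa(manual, auto):
    """Bias and 95% limits of agreement: mean difference ± 1.96 SD of differences."""
    diffs = [m - a for m, a in zip(manual, auto)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A Dice score of 1.0 means perfect overlap with the manual segmentation; the limits of agreement bracket the expected disagreement for a single new examination.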

Citations: 0
Performance of the Winning Algorithms of the RSNA 2022 Cervical Spine Fracture Detection Challenge.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230256
Ghee Rye Lee, Adam E Flanders, Tyler Richards, Felipe Kitamura, Errol Colak, Hui Ming Lin, Robyn L Ball, Jason Talbott, Luciano M Prevedello

Purpose To evaluate and report the performance of the winning algorithms of the Radiological Society of North America Cervical Spine Fracture AI Challenge. Materials and Methods The competition was open to the public on Kaggle from July 28 to October 27, 2022. A sample of 3112 CT scans with and without cervical spine fractures (CSFx) was assembled from multiple sites (12 institutions across six continents) and prepared for the competition. The test set had 1093 scans (private test set: n = 789; mean age, 53.40 years ± 22.86 [SD]; 509 males; public test set: n = 304; mean age, 52.51 years ± 20.73; 189 males) and 847 fractures. The eight top-performing artificial intelligence (AI) algorithms were retrospectively evaluated, and the area under the receiver operating characteristic curve (AUC) value, F1 score, sensitivity, and specificity were calculated. Results A total of 1108 contestants composing 883 teams worldwide participated in the competition. The top eight AI models showed high performance, with a mean AUC value of 0.96 (95% CI: 0.95, 0.96), mean F1 score of 90% (95% CI: 90%, 91%), mean sensitivity of 88% (95% CI: 86%, 90%), and mean specificity of 94% (95% CI: 93%, 96%). The highest values reported for previous models were an AUC of 0.85, F1 score of 81%, sensitivity of 76%, and specificity of 97%. Conclusion The competition successfully facilitated the development of AI models that could detect and localize CSFx on CT scans with high performance outcomes, which appear to exceed known values of previously reported models. Further study is needed to evaluate the generalizability of these models in a clinical environment. Keywords: Cervical Spine, Fracture Detection, Machine Learning, Artificial Intelligence Algorithms, CT, Head/Neck Supplemental material is available for this article. © RSNA, 2024.
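The evaluation metrics above (AUC, F1 score, sensitivity, specificity) all follow from simple counting over labels and scores. A minimal sketch with hypothetical helpers — `binary_metrics` from a confusion matrix, and `auc` via the Mann-Whitney statistic (the probability that a positive case outranks a negative one):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1 from binary labels and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, f1

def auc(y_true, scores):
    """AUC as the Mann-Whitney probability that a positive outranks a negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike sensitivity and specificity, the AUC is threshold-free: it ranks the raw scores rather than thresholded predictions.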

Citations: 0
Accuracy of Radiomics in Predicting IDH Mutation Status in Diffuse Gliomas: A Bivariate Meta-Analysis.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.220257
Gianfranco Di Salle, Lorenzo Tumminello, Maria Elena Laino, Sherif Shalaby, Gayane Aghakhanyan, Salvatore Claudio Fanni, Maria Febi, Jorge Eduardo Shortrede, Mario Miccoli, Lorenzo Faggioni, Mirco Cosottini, Emanuele Neri

Purpose To perform a systematic review and meta-analysis assessing the predictive accuracy of radiomics in the noninvasive determination of isocitrate dehydrogenase (IDH) status in grade 4 and lower-grade diffuse gliomas. Materials and Methods A systematic search was performed in the PubMed, Scopus, Embase, Web of Science, and Cochrane Library databases for relevant articles published between January 1, 2010, and July 7, 2021. Pooled sensitivity and specificity across studies were estimated. Risk of bias was evaluated using Quality Assessment of Diagnostic Accuracy Studies-2, and methods were evaluated using the radiomics quality score (RQS). Additional subgroup analyses were performed according to tumor grade, RQS, and number of sequences used (PROSPERO ID: CRD42021268958). Results Twenty-six studies that included 3280 patients were included for analysis. The pooled sensitivity and specificity of radiomics for the detection of IDH mutation were 79% (95% CI: 76, 83) and 80% (95% CI: 76, 83), respectively. Low RQS scores were found overall for the included works. Subgroup analyses showed lower false-positive rates in very low RQS studies (RQS < 6) (meta-regression, z = -1.9; P = .02) compared with adequate RQS studies. No substantial differences were found in pooled sensitivity and specificity for the pure grade 4 gliomas group compared with the all-grade gliomas group (81% and 86% vs 79% and 79%, respectively) and for studies using single versus multiple sequences (80% and 77% vs 79% and 82%, respectively). Conclusion The pooled data showed that radiomics achieved good accuracy performance in distinguishing IDH mutation status in patients with grade 4 and lower-grade diffuse gliomas. The overall methodologic quality (RQS) was low and introduced potential bias. Keywords: Neuro-Oncology, Radiomics, Integration, Application Domain, Glioblastoma, IDH Mutation, Radiomics Quality Scoring Supplemental material is available for this article. 
Published under a CC BY 4.0 license.
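As a rough illustration of how per-study proportions such as sensitivity or specificity are pooled, the sketch below does inverse-variance pooling on the logit scale. This is a deliberate simplification: the meta-analysis above fits a bivariate random-effects model, which additionally models between-study heterogeneity and the correlation between sensitivity and specificity. `pool_logit` is a hypothetical helper name:

```python
import math

def pool_logit(props, sizes):
    """Fixed-effect, inverse-variance pooling of proportions on the logit scale.

    Simplified stand-in for a bivariate random-effects model: each study's
    logit-proportion is weighted by the inverse of its approximate variance,
    Var(logit p) ≈ 1 / (n · p · (1 - p)).
    """
    num = den = 0.0
    for p, n in zip(props, sizes):
        logit = math.log(p / (1 - p))
        weight = n * p * (1 - p)  # inverse of the logit's approximate variance
        num += weight * logit
        den += weight
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and gives larger, more precise studies proportionally more weight.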

Citations: 0
The LLM Will See You Now: Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230568
Hari Trivedi, Judy Wawira Gichoya
Citations: 0
The Scottish Medical Imaging Archive: A Unique Resource for Imaging-related Research.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230466
Gary J Whitman, David J Vining
Citations: 0
Weak Supervision, Strong Results: Achieving High Performance in Intracranial Hemorrhage Detection with Fewer Annotation Labels.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230598
Kareem A Wahid, David Fuentes
Citations: 0
Sharing Data Is Essential for the Future of AI in Medical Imaging.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230337
Laura C Bell, Efrat Shimron
Citations: 0
Examination-Level Supervision for Deep Learning-based Intracranial Hemorrhage Detection on Head CT Scans.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230159
Jacopo Teneggi, Paul H Yi, Jeremias Sulam

Purpose To compare the effectiveness of weak supervision (ie, with examination-level labels only) and strong supervision (ie, with image-level labels) in training deep learning models for detection of intracranial hemorrhage (ICH) on head CT scans. Materials and Methods In this retrospective study, an attention-based convolutional neural network was trained with either local (ie, image level) or global (ie, examination level) binary labels on the Radiological Society of North America (RSNA) 2019 Brain CT Hemorrhage Challenge dataset of 21 736 examinations (8876 [40.8%] ICH) and 752 422 images (107 784 [14.3%] ICH). The CQ500 (436 examinations; 212 [48.6%] ICH) and CT-ICH (75 examinations; 36 [48.0%] ICH) datasets were employed for external testing. Performance in detecting ICH was compared between weak (examination-level labels) and strong (image-level labels) learners as a function of the number of labels available during training. Results On examination-level binary classification, strong and weak learners did not have different area under the receiver operating characteristic curve values on the internal validation split (0.96 vs 0.96; P = .64) and the CQ500 dataset (0.90 vs 0.92; P = .15). Weak learners outperformed strong ones on the CT-ICH dataset (0.95 vs 0.92; P = .03). Weak learners had better section-level ICH detection performance when more than 10 000 labels were available for training (average f1 = 0.73 vs 0.65; P < .001). Weakly supervised models trained on the entire RSNA dataset required 35 times fewer labels than equivalent strong learners. Conclusion Strongly supervised models did not achieve better performance than weakly supervised ones, which could reduce radiologist labor requirements for prospective dataset curation. Keywords: CT, Head/Neck, Brain/Brain Stem, Hemorrhage Supplemental material is available for this article. © RSNA, 2023 See also commentary by Wahid and Fuentes in this issue.
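A common way to train from examination-level (weak) labels only is to aggregate per-image scores into a single examination-level probability with attention pooling, in the spirit of the attention-based network described above. The sketch below is illustrative only — `attention_pool` is a hypothetical name, and the study's actual model is an attention-based convolutional network, not this two-line aggregator:

```python
import math

def attention_pool(section_scores, attn_logits):
    """Weight per-image hemorrhage scores by softmax attention and sum them,
    yielding one examination-level probability that a weak (examination-level)
    label can supervise directly."""
    m = max(attn_logits)  # subtract the max for numerical stability
    exps = [math.exp(a - m) for a in attn_logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * s for w, s in zip(weights, section_scores))
```

Because the attention weights are learned, the model can implicitly localize which sections drive a positive examination-level prediction — which is how weak learners in the study still achieved section-level detection.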

Citations: 0
Deep Learning-based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230095
Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth

Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on ResNet-18 architecture to differentiate nine MRI sequence types, including T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), and gradient-recalled echo T2*-weighted and dynamic susceptibility contrast-related images. The two-dimensional-midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparison of model performance was performed using χ2 tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model among all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). The ResNet-18 model achieved significantly better accuracy compared with ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional-midsection images for any sequence type (P > .05). 
Conclusion The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms Supplemental material is available for this article. © RSNA, 2023.
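The stratified roughly 80/20 split described above — keeping groups balanced across institutions, patients, and sequence types — can be illustrated with a small helper. `stratified_split` is a hypothetical name sketching the idea, not code from the study:

```python
import random
from collections import defaultdict

def stratified_split(items, key, test_frac=0.2, seed=0):
    """Split items into train/test so each stratum (e.g. institution or MRI
    sequence type, selected by `key`) contributes roughly `test_frac` of its
    members to the test set."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    train, test = [], []
    for members in groups.values():
        rng.shuffle(members)
        k = max(1, round(test_frac * len(members)))
        test.extend(members[:k])
        train.extend(members[k:])
    return train, test
```

Splitting within each stratum prevents a small institution or a rare sequence type from landing entirely in one partition, which a naive random split can easily do.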

{"title":"Deep Learning-based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts.","authors":"Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth","doi":"10.1148/ryai.230095","DOIUrl":"10.1148/ryai.230095","url":null,"abstract":"<p><p>Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on ResNet-18 architecture to differentiate nine MRI sequence types, including T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high <i>b</i> value), and gradient-recalled echo T2*-weighted and dynamic susceptibility contrast-related images. The two-dimensional-midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparison of model performance was performed using χ<sup>2</sup> tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model among all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). 
The ResNet-18 model achieved significantly better accuracy compared with ResNet-50 despite its simpler architecture (97.9% vs 97.1%; <i>P</i> ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional-midsection images for any sequence type (<i>P</i> > .05). Conclusion The developed CNN (<i>www.github.com/neuroAI-HD/HD-SEQ-ID</i>) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. <b>Keywords:</b> MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms <i>Supplemental material is available for this article.</i> © RSNA, 2023.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"6 1","pages":"e230095"},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831512/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139080955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230513
Adrian P Brady, Bibb Allen, Jaron Chong, Elmar Kotter, Nina Kottler, John Mongan, Lauren Oakden-Rayner, Daniel Pinto Dos Santos, An Tang, Christoph Wald, John Slavotinek

Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. 
It has not undergone review or editing by this journal.

{"title":"Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA.","authors":"Adrian P Brady, Bibb Allen, Jaron Chong, Elmar Kotter, Nina Kottler, John Mongan, Lauren Oakden-Rayner, Daniel Pinto Dos Santos, An Tang, Christoph Wald, John Slavotinek","doi":"10.1148/ryai.230513","DOIUrl":"10.1148/ryai.230513","url":null,"abstract":"<p><p>Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. 
<i>This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513).</i> <b>Keywords:</b> Artificial Intelligence, Radiology, Automation, Machine Learning Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"6 1","pages":"e230513"},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139513870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0