
Radiology-Artificial Intelligence: Latest Publications

The LLM Will See You Now: Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230568
Hari Trivedi, Judy Wawira Gichoya
{"title":"The LLM Will See You Now: Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations.","authors":"Hari Trivedi, Judy Wawira Gichoya","doi":"10.1148/ryai.230568","DOIUrl":"10.1148/ryai.230568","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831515/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139643048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Scottish Medical Imaging Archive: A Unique Resource for Imaging-related Research.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230466
Gary J Whitman, David J Vining
{"title":"The Scottish Medical Imaging Archive: A Unique Resource for Imaging-related Research.","authors":"Gary J Whitman, David J Vining","doi":"10.1148/ryai.230466","DOIUrl":"10.1148/ryai.230466","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139080957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Weak Supervision, Strong Results: Achieving High Performance in Intracranial Hemorrhage Detection with Fewer Annotation Labels.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-01-01 DOI: 10.1148/ryai.230598
Kareem A Wahid, David Fuentes
{"title":"Weak Supervision, Strong Results: Achieving High Performance in Intracranial Hemorrhage Detection with Fewer Annotation Labels.","authors":"Kareem A Wahid, David Fuentes","doi":"10.1148/ryai.230598","DOIUrl":"10.1148/ryai.230598","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831509/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139643049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Examination-Level Supervision for Deep Learning-based Intracranial Hemorrhage Detection on Head CT Scans.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230159
Jacopo Teneggi, Paul H Yi, Jeremias Sulam

Purpose To compare the effectiveness of weak supervision (ie, with examination-level labels only) and strong supervision (ie, with image-level labels) in training deep learning models for detection of intracranial hemorrhage (ICH) on head CT scans. Materials and Methods In this retrospective study, an attention-based convolutional neural network was trained with either local (ie, image level) or global (ie, examination level) binary labels on the Radiological Society of North America (RSNA) 2019 Brain CT Hemorrhage Challenge dataset of 21 736 examinations (8876 [40.8%] ICH) and 752 422 images (107 784 [14.3%] ICH). The CQ500 (436 examinations; 212 [48.6%] ICH) and CT-ICH (75 examinations; 36 [48.0%] ICH) datasets were employed for external testing. Performance in detecting ICH was compared between weak (examination-level labels) and strong (image-level labels) learners as a function of the number of labels available during training. Results On examination-level binary classification, strong and weak learners did not have different area under the receiver operating characteristic curve values on the internal validation split (0.96 vs 0.96; P = .64) and the CQ500 dataset (0.90 vs 0.92; P = .15). Weak learners outperformed strong ones on the CT-ICH dataset (0.95 vs 0.92; P = .03). Weak learners had better section-level ICH detection performance when more than 10 000 labels were available for training (average f1 = 0.73 vs 0.65; P < .001). Weakly supervised models trained on the entire RSNA dataset required 35 times fewer labels than equivalent strong learners. Conclusion Strongly supervised models did not achieve better performance than weakly supervised ones, which could reduce radiologist labor requirements for prospective dataset curation. Keywords: CT, Head/Neck, Brain/Brain Stem, Hemorrhage Supplemental material is available for this article. © RSNA, 2023 See also commentary by Wahid and Fuentes in this issue.
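
The weak-supervision setup described in this abstract is essentially multiple-instance learning: each head CT examination is treated as a bag of section images, and only the bag-level (examination-level) ICH label is used during training. The snippet below is a minimal, hypothetical PyTorch sketch of an attention-based pooling head of that kind; it is not the authors' implementation, and the ResNet-18 backbone, feature dimensions, and single training step are simplifying assumptions for illustration only.

```python
# Hypothetical sketch: attention-based multiple-instance learning (MIL) with
# examination-level (weak) labels only, in the spirit of the study above.
# Not the authors' implementation; backbone, dimensions, and training step are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class WeaklySupervisedICHNet(nn.Module):
    def __init__(self, feat_dim: int = 512, attn_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-section feature extractor (assumed)
        backbone.fc = nn.Identity()                # keep the 512-dim pooled features
        self.encoder = backbone
        # Attention module assigns one weight to each CT section of the examination.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)   # examination-level ICH logit

    def forward(self, sections: torch.Tensor):
        # sections: (num_sections, 3, H, W) -- all images of a single examination
        feats = self.encoder(sections)                          # (N, feat_dim)
        weights = torch.softmax(self.attention(feats), dim=0)   # (N, 1), sums to 1
        exam_feat = (weights * feats).sum(dim=0)                # attention-weighted pooling
        return self.classifier(exam_feat), weights              # exam logit + section weights


# Training uses only the examination-level label:
model = WeaklySupervisedICHNet()
exam_sections = torch.randn(32, 3, 256, 256)   # 32 sections from one examination
exam_label = torch.tensor([1.0])               # ICH present somewhere in the examination
logit, section_weights = model(exam_sections)
loss = nn.functional.binary_cross_entropy_with_logits(logit, exam_label)
loss.backward()
```

A side effect of this design is that the per-section attention weights can be inspected after training, which is one common way weakly supervised models are used to approximate section-level localization without section-level labels.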

Citations: 0
Sharing Data Is Essential for the Future of AI in Medical Imaging.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230337
Laura C Bell, Efrat Shimron
{"title":"Sharing Data Is Essential for the Future of AI in Medical Imaging.","authors":"Laura C Bell, Efrat Shimron","doi":"10.1148/ryai.230337","DOIUrl":"10.1148/ryai.230337","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831510/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139478907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning-based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230095
Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth

Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on ResNet-18 architecture to differentiate nine MRI sequence types, including T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), and gradient-recalled echo T2*-weighted and dynamic susceptibility contrast-related images. The two-dimensional-midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparison of model performance was performed using χ2 tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model among all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). The ResNet-18 model achieved significantly better accuracy compared with ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional-midsection images for any sequence type (P > .05). Conclusion The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms Supplemental material is available for this article. © RSNA, 2023.
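
To make the classification setup concrete, the sketch below shows one plausible way to configure a ResNet-18 for nine-way MRI sequence identification from single-channel two-dimensional midsection images, as described in the abstract. It is not the published HD-SEQ-ID code; the class labels, the single-channel first convolution, and the hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch: ResNet-18 over single-channel 2D midsection images for
# nine-way MRI sequence classification, as described in the abstract above.
# Not the HD-SEQ-ID code; class names, preprocessing, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

SEQUENCE_CLASSES = [  # the nine sequence types named in the abstract (label strings assumed)
    "T1w", "T1w_postcontrast", "T2w", "FLAIR", "SWI",
    "ADC", "DWI_low_b", "DWI_high_b", "T2star_GRE_DSC",
]


def build_sequence_classifier(num_classes: int = len(SEQUENCE_CLASSES)) -> nn.Module:
    model = models.resnet18(weights=None)
    # MRI midsections are single channel, so swap the 3-channel stem for a 1-channel one.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


model = build_sequence_classifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of midsection images.
images = torch.randn(8, 1, 224, 224)                     # 8 single-channel slices
targets = torch.randint(0, len(SEQUENCE_CLASSES), (8,))  # ground-truth sequence labels
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```

In practice the split into training, validation, and test data would be stratified by institution, patient, and sequence type, as the abstract describes, so that performance estimates are not inflated by scanner- or site-specific shortcuts.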

Citations: 0
Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230513
Adrian P Brady, Bibb Allen, Jaron Chong, Elmar Kotter, Nina Kottler, John Mongan, Lauren Oakden-Rayner, Daniel Pinto Dos Santos, An Tang, Christoph Wald, John Slavotinek

Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.

Citations: 0
Seeing Is Not Always Believing: Discrepancies in Saliency Maps.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230488
Masahiro Yanagawa, Junya Sato
{"title":"Seeing Is Not Always Believing: Discrepancies in Saliency Maps.","authors":"Masahiro Yanagawa, Junya Sato","doi":"10.1148/ryai.230488","DOIUrl":"10.1148/ryai.230488","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831517/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139080956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy, Please: Safeguarding Medical Data in Imaging AI Using Differential Privacy Techniques.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.230560
Abhinav Suri, Ronald M Summers
{"title":"Privacy, Please: Safeguarding Medical Data in Imaging AI Using Differential Privacy Techniques.","authors":"Abhinav Suri, Ronald M Summers","doi":"10.1148/ryai.230560","DOIUrl":"10.1148/ryai.230560","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831504/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139478906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Celebrating the Journal's First 5 Years.
IF 9.8 Pub Date : 2024-01-01 DOI: 10.1148/ryai.240037
Charles E Kahn
{"title":"Celebrating the Journal's First 5 Years.","authors":"Charles E Kahn","doi":"10.1148/ryai.240037","DOIUrl":"10.1148/ryai.240037","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139643046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0