
Radiology-Artificial Intelligence: Latest Publications

Completing the Baby Album: AI Synthesizing Infant Brain MRI for Missing Time Points.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.250779
Gunvant Chaudhari, Andreas Rauschecker
{"title":"Completing the Baby Album: AI Synthesizing Infant Brain MRI for Missing Time Points.","authors":"Gunvant Chaudhari, Andreas Rauschecker","doi":"10.1148/ryai.250779","DOIUrl":"https://doi.org/10.1148/ryai.250779","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 6","pages":"e250779"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145606000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bridging Radiologic Reasoning and Artificial Intelligence: Explainable Deep Learning for Focal Liver Lesions.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.250806
Lisa C Adams, Keno K Bressem
{"title":"Bridging Radiologic Reasoning and Artificial Intelligence: Explainable Deep Learning for Focal Liver Lesions.","authors":"Lisa C Adams, Keno K Bressem","doi":"10.1148/ryai.250806","DOIUrl":"https://doi.org/10.1148/ryai.250806","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 6","pages":"e250806"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145446158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distinguishing between Rigor and Transparency in FDA Marketing Authorization of AI-enabled Medical Devices.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.250369
Abdul Rahman Diab, William Lotter

The increasing prevalence of artificial intelligence (AI)-enabled medical devices presents significant opportunities for improving patient outcomes. However, recent studies based on public U.S. Food and Drug Administration (FDA) summaries have raised concerns about the extent of validation that such devices undergo before FDA marketing authorization and subsequent clinical deployment. Here, the authors clarify key concepts of FDA regulation and provide insights into the current standards of performance validation, focusing on radiology AI devices. The authors distinguish between two fundamentally different but often conflated concepts: validation rigor (ie, the quality and comprehensiveness of the evidence supporting a device's performance) and validation transparency (ie, the extent to which this evidence is publicly accessible). The authors begin by describing the inverse relationship between the amount of performance data contained and the transparency of specific components of an FDA submission. Drawing on FDA guidelines and on experience developing authorized AI devices, the authors outline current validation standards and present a mapping from common radiology AI device types to their typical clinical study designs. This article concludes with actionable recommendations, advocating for a balanced approach tailored to specific use cases while still enforcing certain universal standards. These measures will help ensure that AI-enabled medical devices are both rigorously evaluated and transparently reported, thereby fostering greater public trust and enhancing clinical utility.

{"title":"Distinguishing between Rigor and Transparency in FDA Marketing Authorization of AI-enabled Medical Devices.","authors":"Abdul Rahman Diab, William Lotter","doi":"10.1148/ryai.250369","DOIUrl":"10.1148/ryai.250369","url":null,"abstract":"<p><p>The increasing prevalence of artificial intelligence (AI)-enabled medical devices presents significant opportunities for improving patient outcomes. However, recent studies based on public U.S. Food and Drug Administration (FDA) summaries have raised concerns about the extent of validation that such devices undergo before FDA marketing authorization and subsequent clinical deployment. Here, the authors clarify key concepts of FDA regulation and provide insights into the current standards of performance validation, focusing on radiology AI devices. The authors distinguish between two fundamentally different but often conflated concepts: validation rigor (ie, the quality and comprehensiveness of the evidence supporting a device's performance) and validation transparency (ie, the extent to which this evidence is publicly accessible). The authors begin by describing the inverse relationship between the amount of performance data contained and the transparency of specific components of an FDA submission. Drawing on FDA guidelines and on experience developing authorized AI devices, the authors outline current validation standards and present a mapping from common radiology AI device types to their typical clinical study designs. This article concludes with actionable recommendations, advocating for a balanced approach tailored to specific use cases while still enforcing certain universal standards. These measures will help ensure that AI-enabled medical devices are both rigorously evaluated and transparently reported, thereby fostering greater public trust and enhancing clinical utility.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e250369"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145132123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Are AI Models Using Shortcuts to Detect Breast Cancer Risk?
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.250798
Judy W Gichoya, Hari Trivedi
{"title":"Are AI Models Using Shortcuts to Detect Breast Cancer Risk?","authors":"Judy W Gichoya, Hari Trivedi","doi":"10.1148/ryai.250798","DOIUrl":"https://doi.org/10.1148/ryai.250798","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 6","pages":"e250798"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145551106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated Deep Learning-based Segmentation of the Dentate Nucleus Using Quantitative Susceptibility Mapping MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240478
Diogo H Shiraishi, Susmita Saha, Isaac M Adanyeguh, Sirio Cocozza, Louise A Corben, Andreas Deistung, Martin B Delatycki, Imis Dogan, William Gaetz, Nellie Georgiou-Karistianis, Simon Graf, Marina Grisoli, Pierre-Gilles Henry, Gustavo M Jarola, James M Joers, Christian Langkammer, Christophe Lenglet, Jiakun Li, Camila C Lobo, Eric F Lock, David R Lynch, Thomas H Mareci, Alberto R M Martinez, Serena Monti, Anna Nigri, Massimo Pandolfo, Kathrin Reetz, Timothy P Roberts, Sandro Romanzetti, David A Rudko, Alessandra Scaravilli, Jörg B Schulz, S H Subramony, Dagmar Timmann, Marcondes C França, Ian H Harding, Thiago J R Rezende

Purpose To develop a dentate nucleus (DN) segmentation tool using deep learning applied to brain MRI-based quantitative susceptibility mapping (QSM) images. Materials and Methods Brain QSM images from healthy controls and individuals with cerebellar ataxia or multiple sclerosis were collected from nine different datasets (2016-2023) worldwide for this retrospective study (ClinicalTrials.gov identifier: NCT04349514). Manual delineation of the DN was performed by experienced raters. Automated segmentation performance was evaluated against manual reference segmentations following training with several deep learning architectures. A two-step approach was used, consisting of a localization model followed by DN segmentation. Performance metrics included intraclass correlation coefficient (ICC), Dice score, and Pearson correlation coefficient. Results The training and testing datasets comprised 328 individuals (age range, 11-64 years; 171 female individuals), including 141 healthy individuals and 187 with cerebellar ataxia or multiple sclerosis. The manual tracing protocol produced reference standards with high intrarater (average ICC, 0.91) and interrater reliability (average ICC, 0.78). Initial deep learning architecture exploration indicated that the nnU-Net framework performed best. The two-step localization plus segmentation pipeline achieved a Dice score of 0.90 ± 0.03 (SD) and 0.89 ± 0.04 for left and right DN segmentation, respectively. In external testing, the proposed algorithm outperformed the current leading automated tool (mean Dice scores for left and right DN, 0.86 ± 0.04 vs 0.57 ± 0.22 [P < .001]; 0.84 ± 0.07 vs 0.58 ± 0.24 [P < .001]). The model demonstrated generalizability across datasets unseen during the training step, with automated segmentations showing high correlation with manual annotations (left DN: r = 0.74 [P < .001]; right DN: r = 0.48 [P = .03]). Conclusion The proposed model accurately and efficiently segmented the DN from brain QSM images. The model is publicly available (https://github.com/art2mri/DentateSeg). Keywords: MR Imaging, Brain/Brain Stem, Segmentation, Convolutional Neural Network, Supervised Learning, Computer Applications-3D, Volume Analysis, Image Postprocessing ClinicalTrials.gov registration no. NCT04349514 Supplemental material is available for this article. © RSNA, 2025.
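The Dice score reported above is the standard overlap metric for segmentation evaluation. For reference, a minimal NumPy sketch of the computation follows; the mask names are hypothetical and unrelated to the authors' released code at https://github.com/art2mri/DentateSeg.

import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    # Dice = 2|A n B| / (|A| + |B|); 1.0 means perfect overlap.
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: a reference dentate nucleus mask vs a prediction shifted by one voxel.
ref = np.zeros((64, 64, 64), dtype=bool)
ref[20:30, 20:30, 20:30] = True
pred = np.zeros_like(ref)
pred[21:31, 20:30, 20:30] = True
print(f"Dice = {dice_score(pred, ref):.3f}")  # 0.900, comparable to the reported scores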

{"title":"Automated Deep Learning-based Segmentation of the Dentate Nucleus Using Quantitative Susceptibility Mapping MRI.","authors":"Diogo H Shiraishi, Susmita Saha, Isaac M Adanyeguh, Sirio Cocozza, Louise A Corben, Andreas Deistung, Martin B Delatycki, Imis Dogan, William Gaetz, Nellie Georgiou-Karistianis, Simon Graf, Marina Grisoli, Pierre-Gilles Henry, Gustavo M Jarola, James M Joers, Christian Langkammer, Christophe Lenglet, Jiakun Li, Camila C Lobo, Eric F Lock, David R Lynch, Thomas H Mareci, Alberto R M Martinez, Serena Monti, Anna Nigri, Massimo Pandolfo, Kathrin Reetz, Timothy P Roberts, Sandro Romanzetti, David A Rudko, Alessandra Scaravilli, Jörg B Schulz, S H Subramony, Dagmar Timmann, Marcondes C França, Ian H Harding, Thiago J R Rezende","doi":"10.1148/ryai.240478","DOIUrl":"10.1148/ryai.240478","url":null,"abstract":"<p><p>Purpose To develop a dentate nucleus (DN) segmentation tool using deep learning applied to brain MRI-based quantitative susceptibility mapping (QSM) images. Materials and Methods Brain QSM images from healthy controls and individuals with cerebellar ataxia or multiple sclerosis were collected from nine different datasets (2016-2023) worldwide for this retrospective study (ClinicalTrials.gov identifier: NCT04349514). Manual delineation of the DN was performed by experienced raters. Automated segmentation performance was evaluated against manual reference segmentations following training with several deep learning architectures. A two-step approach was used, consisting of a localization model followed by DN segmentation. Performance metrics included intraclass correlation coefficient (ICC), Dice score, and Pearson correlation coefficient. Results The training and testing datasets comprised 328 individuals (age range, 11-64 years; 171 female individuals), including 141 healthy individuals and 187 with cerebellar ataxia or multiple sclerosis. The manual tracing protocol produced reference standards with high intrarater (average ICC, 0.91) and interrater reliability (average ICC, 0.78). Initial deep learning architecture exploration indicated that the nnU-Net framework performed best. The two-step localization plus segmentation pipeline achieved a Dice score of 0.90 ± 0.03 (SD) and 0.89 ± 0.04 for left and right DN segmentation, respectively. In external testing, the proposed algorithm outperformed the current leading automated tool (mean Dice scores for left and right DN, 0.86 ± 0.04 vs 0.57 ± 0.22 [<i>P</i> < .001]; 0.84 ± 0.07 vs 0.58 ± 0.24 [<i>P</i> < .001]). The model demonstrated generalizability across datasets unseen during the training step, with automated segmentations showing high correlation with manual annotations (left DN: <i>r =</i> 0.74 [<i>P</i> < .001]; right DN: <i>r =</i> 0.48 [<i>P</i> = .03]). Conclusion The proposed model accurately and efficiently segmented the DN from brain QSM images. The model is publicly available <i>(https://github.com/art2mri/DentateSeg)</i>. <b>Keywords:</b> MR Imaging, Brain/Brain Stem, Segmentation, Convolutional Neural Network, Supervised Learning, Computer Applications-3D, Volume Analysis, Image Postprocessing ClinicalTrials.gov registration no. 
NCT04349514 <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240478"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144790143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using Explainable AI to Characterize Features in the Mirai Mammographic Breast Cancer Risk Prediction Model.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240417
Yao-Kuan Wang, Zan Klanecek, Tobias Wagner, Lesley Cockmartin, Nicholas Marshall, Andrej Studen, Robert Jeraj, Hilde Bosmans

Purpose To evaluate whether features extracted by Mirai can be aligned with mammographic observations and contribute meaningfully to the prediction of breast cancer risk. Materials and Methods This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29 374 screening examinations with mammograms (10 415 female patients; mean age at examination, 60 years ± 11 [SD]) from the EMory BrEast imaging Dataset (EMBED) (2013-2020) were used to evaluate feature importance using a feature-centric explainable artificial intelligence pipeline. Risk prediction was evaluated using only calcification features (CalcMirai) or mass features (MassMirai) against Mirai. Performance was assessed in screening and screen-negative (time to cancer, >6 months) populations using the area under the receiver operating characteristic curve (AUC). Results Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai had lower performance than Mirai in lesion detection (screening population: Mirai 1-year AUC, 0.81 [95% CI: 0.78, 0.84]; CalcMirai 1-year AUC, 0.76 [95% CI: 0.73, 0.80]; MassMirai 1-year AUC, 0.74 [95% CI: 0.71, 0.78] [P < .001]). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population: Mirai 5-year AUC, 0.66 [95% CI: 0.63, 0.69]; CalcMirai 5-year AUC, 0.66 [95% CI: 0.64, 0.69] [P = .71]). However, MassMirai achieved lower performance than Mirai (5-year AUC, 0.57 [95% CI: 0.54, 0.60]; P < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcification in risk prediction. Conclusion The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction. Keywords: Breast, Mammography, Screening Supplemental material is available for this article. © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license. See also commentary by Gichoya and Trivedi in this issue.
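The feature-subsetting idea behind CalcMirai and MassMirai (scoring risk from only a chosen subset of the 512 backbone features) can be sketched roughly as follows. This is an illustrative analogue only: the data are synthetic, a simple logistic head stands in for Mirai's actual risk layer, and the feature indices are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: one 512-dim feature vector per examination, 1-year labels.
features = rng.normal(size=(2000, 512))
labels = (rng.random(2000) < 0.05 + 0.2 * (features[:, 7] > 1.0)).astype(int)

calc_idx = [7, 12, 45]                       # hypothetical "calcification" features
masked = np.zeros_like(features)
masked[:, calc_idx] = features[:, calc_idx]  # zero out every other feature

for name, X in [("all features", features), ("calcification subset", masked)]:
    clf = LogisticRegression(max_iter=1000).fit(X[:1000], labels[:1000])
    auc = roc_auc_score(labels[1000:], clf.predict_proba(X[1000:])[:, 1])
    print(f"{name}: AUC = {auc:.2f}")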

{"title":"Using Explainable AI to Characterize Features in the Mirai Mammographic Breast Cancer Risk Prediction Model.","authors":"Yao-Kuan Wang, Zan Klanecek, Tobias Wagner, Lesley Cockmartin, Nicholas Marshall, Andrej Studen, Robert Jeraj, Hilde Bosmans","doi":"10.1148/ryai.240417","DOIUrl":"10.1148/ryai.240417","url":null,"abstract":"<p><p>Purpose To evaluate whether features extracted by Mirai can be aligned with mammographic observations and contribute meaningfully to the prediction of breast cancer risk. Materials and Methods This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29 374 screening examinations with mammograms (10 415 female patients; mean age at examination, 60 years ± 11 [SD]) from the EMory BrEast imaging Dataset (EMBED) (2013-2020) were used to evaluate feature importance using a feature-centric explainable artificial intelligence pipeline. Risk prediction was evaluated using only calcification features (CalcMirai) or mass features (MassMirai) against Mirai. Performance was assessed in screening and screen-negative (time to cancer, >6 months) populations using the area under the receiver operating characteristic curve (AUC). Results Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai had lower performance than Mirai in lesion detection (screening population: Mirai 1-year AUC, 0.81 [95% CI: 0.78, 0.84]; CalcMirai 1-year AUC, 0.76 [95% CI: 0.73, 0.80]; MassMirai 1-year AUC, 0.74 [95% CI: 0.71, 0.78] [<i>P</i> < .001]). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population: Mirai 5-year AUC, 0.66 [95% CI: 0.63, 0.69]; CalcMirai 5-year AUC, 0.66 [95% CI: 0.64, 0.69] [<i>P</i> = .71]). However, MassMirai achieved lower performance than Mirai (5-year AUC, 0.57 [95% CI: 0.54, 0.60]; <i>P</i> < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcification in risk prediction. Conclusion The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction. <b>Keywords:</b> Breast, Mammography, Screening <i>Supplemental material is available for this article.</i> © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license. See also commentary by Gichoya and Trivedi in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240417"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond the Image: How Acquisition Parameters Influence AI and Radiologists in Screening Mammography.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.250770
Hyo-Jae Lee, Min Sun Bae
{"title":"Beyond the Image: How Acquisition Parameters Influence AI and Radiologists in Screening Mammography.","authors":"Hyo-Jae Lee, Min Sun Bae","doi":"10.1148/ryai.250770","DOIUrl":"https://doi.org/10.1148/ryai.250770","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 6","pages":"e250770"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145497012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SNRAware: Improved Deep Learning MRI Denoising with Signal-to-Noise Ratio Unit Training and G-Factor Map Augmentation.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.250227
Hui Xue, Sarah M Hooper, Iain Pierce, Rhodri H Davies, John Stairs, Joseph Naegele, Adrienne E Campbell-Washburn, Charlotte Manisty, James C Moon, Thomas A Treibel, Michael S Hansen, Peter Kellman

Purpose To develop and evaluate a deep learning-based MRI denoising method using quantitative noise distribution information obtained during image reconstruction to improve model performance and generalization. Materials and Methods This retrospective study included a training set of 2 885 236 images from 96 605 cardiac cine series acquired with 3-T MRI scanners from January 2018 to December 2020. Of these data, 95% were used for training, and 5% were used for validation. The hold-out test set included 3000 cine series, acquired in the same period. Fourteen model architectures were evaluated by instantiating each of the two backbone types with seven transformer and convolution block types. The proposed SNRAware training scheme leveraged MRI reconstruction knowledge to enhance denoising by simulating diverse synthetic datasets and providing quantitative noise distribution information. Internal testing measured performance using peak signal-to-noise ratio and structural similarity index measure, whereas external tests conducted with 1.5-T real-time cardiac cine, first-pass cardiac perfusion, brain, and spine MRI assessed generalization across various sequences, contrast agents, anatomies, and field strengths. Results SNRAware improved performance on internal tests conducted on a hold-out dataset of 3000 cine series. Models trained without reconstruction knowledge achieved the worst performance metrics. Improvement was architecture agnostic for both convolution and transformer models. However, transformer models outperformed their convolutional counterparts. Additionally, three-dimensional input tensors showed improved performance over two-dimensional images. The best-performing model from the internal testing generalized well to external samples, delivering 6.5 and 2.9 times contrast-to-noise ratio improvement for real-time cine and perfusion imaging, respectively. The model trained using only cardiac cine data generalized well to three-dimensional T1-weighted magnetization-prepared rapid gradient-echo brain and T2-weighted turbo spin-echo spine MRI acquisitions. Conclusion The SNRAware training scheme leveraged data obtained during the image reconstruction process for deep learning-based MRI denoising training, resulting in improved performance and good generalization. Keywords: MRI, Deep Learning, MRI Denoising Supplemental material is available for this article. © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license.
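The core "SNR unit" idea is to divide each reconstructed image by its spatially varying noise standard deviation, which the g-factor map supplies, so the network trains on inputs with a physically meaningful scale. A minimal sketch under assumed shapes and names, not the authors' implementation:

import numpy as np

def to_snr_units(image, g_map, noise_sigma):
    # After parallel-imaging reconstruction the local noise std is approximately
    # noise_sigma * g_map, so this division yields an image in SNR units.
    return image / (noise_sigma * g_map)

def augment_g_factor(g_map, rng):
    # Toy augmentation: globally rescale the noise map so training covers a
    # range of synthetic noise levels, in the spirit of the g-factor map
    # augmentation described above.
    return g_map * rng.uniform(0.5, 2.0)

rng = np.random.default_rng(0)
image = rng.normal(size=(192, 192)) + 1j * rng.normal(size=(192, 192))  # complex recon
g_map = 1.0 + rng.random((192, 192))                                    # g-factor >= 1
network_input = to_snr_units(image, augment_g_factor(g_map, rng), noise_sigma=0.02)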

{"title":"SNRAware: Improved Deep Learning MRI Denoising with Signal-to-Noise Ratio Unit Training and G-Factor Map Augmentation.","authors":"Hui Xue, Sarah M Hooper, Iain Pierce, Rhodri H Davies, John Stairs, Joseph Naegele, Adrienne E Campbell-Washburn, Charlotte Manisty, James C Moon, Thomas A Treibel, Michael S Hansen, Peter Kellman","doi":"10.1148/ryai.250227","DOIUrl":"10.1148/ryai.250227","url":null,"abstract":"<p><p>Purpose To develop and evaluate a deep learning-based MRI denoising method using quantitative noise distribution information obtained during image reconstruction to improve model performance and generalization. Materials and Methods This retrospective study included a training set of 2 885 236 images from 96 605 cardiac cine series acquired with 3-T MRI scanners from January 2018 to December 2020. Of these data, 95% were used for training, and 5% were used for validation. The hold-out test set included 3000 cine series, acquired in the same period. Fourteen model architectures were evaluated by instantiating each of the two backbone types with seven transformer and convolution block types. The proposed SNRAware training scheme leveraged MRI reconstruction knowledge to enhance denoising by simulating diverse synthetic datasets and providing quantitative noise distribution information. Internal testing measured performance using peak signal-to-noise ratio and structural similarity index measure, whereas external tests conducted with 1.5-T real-time cardiac cine, first-pass cardiac perfusion, brain, and spine MRI assessed generalization across various sequences, contrast agents, anatomies, and field strengths. Results SNRAware improved performance on internal tests conducted on a hold-out dataset of 3000 cine series. Models trained without reconstruction knowledge achieved the worst performance metrics. Improvement was architecture agnostic for both convolution and transformer models. However, transformer models outperformed their convolutional counterparts. Additionally, three-dimensional input tensors showed improved performance over two-dimensional images. The best-performing model from the internal testing generalized well to external samples, delivering 6.5 and 2.9 times contrast-to-noise ratio improvement for real-time cine and perfusion imaging, respectively. The model trained using only cardiac cine data generalized well to three-dimensional T1-weighted magnetization-prepared rapid gradient-echo brain and T2-weighted turbo spin-echo spine MRI acquisitions. Conclusion The SNRAware training scheme leveraged data obtained during the image reconstruction process for deep learning-based MRI denoising training, resulting in improved performance and good generalization. <b>Keywords:</b> MRI, Deep Learning, MRI Denoising <i>Supplemental material is available for this article.</i> © The Author(s) 2025. 
Published by the Radiological Society of North America under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e250227"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12665503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145348765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Influence of Mammography Acquisition Parameters on AI and Radiologist Interpretive Performance.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240861
William Lotter, Daniel S Hippe, Thomas Oshiro, Kathryn P Lowry, Hannah S Milch, Diana L Miglioretti, Joann G Elmore, Christoph I Lee, William Hsu

Purpose To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of artificial intelligence (AI) and radiologists. Materials and Methods The associations between seven mammogram acquisition parameters (mammography machine version, kilovoltage peak, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness) and AI and radiologist performance in interpreting two-dimensional screening mammograms acquired by a diverse health system between December 2010 and 2019 were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography Dialogue on Reverse Engineering Assessment and Methods (DREAM) Challenge were assessed. The associations between each acquisition parameter and the sensitivity and specificity of the AI models and the radiologists' interpretations were separately evaluated using generalized estimating equations-based models at the examination level, adjusted for several clinical factors. Results The dataset included 28 278 screening two-dimensional mammograms from 22 626 women (mean age ± SD, 58.5 years ± 11.5; 4913 women had multiple mammograms). Of these, 324 examinations resulted in a breast cancer diagnosis within 1 year. The acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure delivered reduced the specificity for the ensemble AI (-4.5% per 1 SD increase; P < .001) but not radiologists (P = .44). Increased compression force reduced the specificity for radiologists (-1.3% per 1 SD increase; P < .001) but not for AI (P = .60). Conclusion Screening mammography acquisition parameters impacted the performance of both AI and radiologists, with some parameters impacting performance differently. Keywords: AI Robustness, Mammography, Medical Physics Supplemental material is available for this article. © RSNA, 2025 See also commentary by Lee and Bae in this issue.
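An examination-level generalized estimating equations model of the kind described (binary interpretation correctness regressed on a standardized acquisition parameter, with examinations clustered by patient) might be fit as follows with statsmodels; the data frame and variable names are hypothetical:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical examination-level rows: one per screening mammogram.
df = pd.DataFrame({
    "patient_id": rng.integers(0, 200, n),       # repeated examinations per woman
    "correct": rng.integers(0, 2, n),            # 1 = correct interpretation
    "compression_force_sd": rng.normal(size=n),  # acquisition parameter, standardized
    "age": rng.normal(60, 11, n),                # example clinical adjustment
})

# Logistic GEE with an exchangeable working correlation within patient,
# mirroring the patient-clustered, examination-level models in the study.
model = smf.gee(
    "correct ~ compression_force_sd + age",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())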

{"title":"Influence of Mammography Acquisition Parameters on AI and Radiologist Interpretive Performance.","authors":"William Lotter, Daniel S Hippe, Thomas Oshiro, Kathryn P Lowry, Hannah S Milch, Diana L Miglioretti, Joann G Elmore, Christoph I Lee, William Hsu","doi":"10.1148/ryai.240861","DOIUrl":"10.1148/ryai.240861","url":null,"abstract":"<p><p>Purpose To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of artificial intelligence (AI) and radiologists. Materials and Methods The associations between seven mammogram acquisition parameters-mammography machine version, kilovoltage peak, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness-and AI and radiologist performance in interpreting two-dimensional screening mammograms acquired by a diverse health system between December 2010 and 2019 were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography Dialogue on Reverse Engineering Assessment and Methods (DREAM) Challenge were assessed. The associations between each acquisition parameter and the sensitivity and specificity of the AI models and the radiologists' interpretations were separately evaluated using generalized estimating equations-based models at the examination level, adjusted for several clinical factors. Results The dataset included 28 278 screening two-dimensional mammograms from 22 626 women (mean age ± SD, 58.5 years ± 11.5; 4913 women had multiple mammograms). Of these, 324 examinations resulted in a breast cancer diagnosis within 1 year. The acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure delivered reduced the specificity for the ensemble AI (-4.5% per 1 SD increase; <i>P</i> < .001) but not radiologists (<i>P</i> = .44). Increased compression force reduced the specificity for radiologists (-1.3% per 1 SD increase; <i>P</i> < .001) but not for AI (<i>P</i> = .60). Conclusion Screening mammography acquisition parameters impacted the performance of both AI and radiologists, with some parameters impacting performance differently. <b>Keywords:</b> AI Robustness, Mammography, Medical Physics <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Lee and Bae in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240861"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12649416/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145076228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Deep Learning Framework for Synthesizing Longitudinal Infant Brain MRI during Early Development.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240708
Yu Fang, Honglin Xiong, Jiawei Huang, Feihong Liu, Zhenrong Shen, Xinyi Cai, Han Zhang, Qian Wang

Purpose To develop a three-stage, age- and modality-conditioned framework to synthesize longitudinal infant brain MRI scans and account for rapid structural and contrast changes during early brain development. Materials and Methods This retrospective study utilized T1- and T2-weighted MRI scans (848 in total) from 139 infants in the Baby Connectome Project, collected between September 2016 and May 2020. The framework models three critical image cues: volumetric expansion, cortical folding, and myelination, predicting missing time points with age and modality as predictive factors. The method was compared with LGAN, CounterSyn, and a diffusion-based approach using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the Dice similarity coefficient (DSC). Results The framework was trained on 119 participants (mean age ± SD, 11.25 months ± 6.16; 60 female and 59 male infants) and tested on 20 participants (mean age, 12.98 months ± 6.59; 11 female and nine male infants). For T1-weighted images, PSNRs were 25.44 ± 1.95 and 26.93 ± 2.50 for forward and backward MRI synthesis, respectively, and SSIMs were 0.87 ± 0.03 and 0.90 ± 0.02, respectively. For T2-weighted images, PSNRs were 26.35 ± 2.30 and 26.40 ± 2.56, respectively, with SSIMs of 0.87 ± 0.03 and 0.89 ± 0.02, respectively, showing significant outperformance compared with competing methods (P < .001). The framework also excelled in tissue segmentation (P < .001) and cortical reconstruction, achieving a DSC of 0.85 for gray matter and 0.86 for white matter, with intraclass correlation coefficients exceeding 0.8 in most cortical regions. Conclusion The proposed three-stage framework effectively synthesized age-specific infant brain MRI scans, outperforming competing methods in image quality and tissue segmentation and with strong performance in cortical reconstruction, demonstrating potential for developmental modeling and longitudinal analyses. Keywords: Pediatrics, Brain, Brain Stem, MRI, Infant Brain MRI Supplemental material is available for this article. © RSNA, 2025 See also commentary by Chaudhari and Rauschecker in this issue.
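The PSNR and SSIM metrics used to compare synthesized and acquired scans are standard and available directly in scikit-image. A minimal sketch on hypothetical volumes (the arrays below are synthetic stand-ins, not study data):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

# Hypothetical ground-truth and synthesized T1-weighted volumes scaled to [0, 1].
truth = rng.random((96, 96, 96)).astype(np.float32)
synth = np.clip(truth + 0.05 * rng.normal(size=truth.shape).astype(np.float32), 0, 1)

psnr = peak_signal_noise_ratio(truth, synth, data_range=1.0)
ssim = structural_similarity(truth, synth, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")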

{"title":"A Deep Learning Framework for Synthesizing Longitudinal Infant Brain MRI during Early Development.","authors":"Yu Fang, Honglin Xiong, Jiawei Huang, Feihong Liu, Zhenrong Shen, Xinyi Cai, Han Zhang, Qian Wang","doi":"10.1148/ryai.240708","DOIUrl":"10.1148/ryai.240708","url":null,"abstract":"<p><p>Purpose To develop a three-stage, age- and modality-conditioned framework to synthesize longitudinal infant brain MRI scans and account for rapid structural and contrast changes during early brain development. Materials and Methods This retrospective study utilized T1- and T2-weighted MRI scans (848 in total) from 139 infants in the Baby Connectome Project, collected between September 2016 and May 2020. The framework models three critical image cues related: volumetric expansion, cortical folding, and myelination, predicting missing time points with age and modality as predictive factors. The method was compared with LGAN, CounterSyn, and a diffusion-based approach using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the Dice similarity coefficient (DSC). Results The framework was trained on 119 participants (mean age ± SD, 11.25 months ± 6.16; 60 female and 59 male infants) and tested on 20 participants (mean age, 12.98 months ± 6.59; 11 female and nine male infants). For T1-weighted images, PSNRs were 25.44 ± 1.95 and 26.93 ± 2.50 for forward and backward MRI synthesis, respectively, and SSIMs were 0.87 ± 0.03 and 0.90 ± 0.02, respectively. For T2-weighted images, PSNRs were 26.35 ± 2.30 and 26.40 ± 2.56, respectively, with SSIMs of 0.87 ± 0.03 and 0.89 ± 0.02, respectively, showing significant outperformance compared with competing methods (<i>P</i> < .001). The framework also excelled in tissue segmentation (<i>P</i> < .001) and cortical reconstruction, achieving a DSC of 0.85 for gray matter and 0.86 for white matter, with intraclass correlation coefficients exceeding 0.8 in most cortical regions. Conclusion The proposed three-stage framework effectively synthesized age-specific infant brain MRI scans, outperforming competing methods in image quality and tissue segmentation and with strong performance in cortical reconstruction, demonstrating potential for developmental modeling and longitudinal analyses. <b>Keywords:</b> Pediatrics, Brain, Brain Stem, MRI, Infant Brain MRI <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Chaudhari and Rauschecker in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240708"},"PeriodicalIF":13.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145076276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0