
Latest Articles in Radiology: Artificial Intelligence

A Semiautonomous Deep Learning System to Reduce False Positives in Screening Mammography.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.230033
Stefano Pedemonte, Trevor Tsue, Brent Mombourquette, Yen Nhi Truong Vu, Thomas Matthews, Rodrigo Morales Hoil, Meet Shah, Nikita Ghare, Naomi Zingman-Daniels, Susan Holley, Catherine M Appleton, Jason Su, Richard L Wahl

Purpose To evaluate the ability of a semiautonomous artificial intelligence (AI) model to identify screening mammograms not suspicious for breast cancer and reduce the number of false-positive examinations. Materials and Methods The deep learning algorithm was trained using 123 248 two-dimensional digital mammograms (6161 cancers), and a retrospective study was performed on three nonoverlapping datasets of 14 831 screening mammography examinations (1026 cancers) from two U.S. institutions and one U.K. institution (2008-2017). The stand-alone performance of humans and AI was compared. Human plus AI performance was simulated to examine reductions in the cancer detection rate, number of examinations, false-positive callbacks, and benign biopsies. Metrics were adjusted to mimic the natural distribution of a screening population, and bootstrapped CIs and P values were calculated. Results Retrospective evaluation on all datasets showed minimal changes to the cancer detection rate with use of the AI device (noninferiority margin of 0.25 cancers per 1000 examinations: U.S. dataset 1, P = .02; U.S. dataset 2, P < .001; U.K. dataset, P < .001). On U.S. dataset 1 (11 592 mammograms; 101 cancers; 3810 female patients; mean age, 57.3 years ± 10.0 [SD]), the device reduced screening examinations requiring radiologist interpretation by 41.6% (95% CI: 40.6%, 42.4%; P < .001), diagnostic examination callbacks by 31.1% (95% CI: 28.7%, 33.4%; P < .001), and benign needle biopsies by 7.4% (95% CI: 4.1%, 12.4%; P < .001). For U.S. dataset 2 (1362 mammograms; 330 cancers; 1293 female patients; mean age, 55.4 years ± 10.5), the corresponding reductions were 19.5% (95% CI: 16.9%, 22.1%; P < .001), 11.9% (95% CI: 8.6%, 15.7%; P < .001), and 6.5% (95% CI: 0.0%, 19.0%; P = .08), respectively. For the U.K. dataset (1877 mammograms; 595 cancers; 1491 female patients; mean age, 63.5 years ± 7.1), they were 36.8% (95% CI: 34.4%, 39.7%; P < .001), 17.1% (95% CI: 5.9%, 30.1%; P < .001), and 5.9% (95% CI: 2.9%, 11.5%; P < .001), respectively. Conclusion This work demonstrates the potential of a semiautonomous breast cancer screening system to reduce false positives, unnecessary procedures, patient anxiety, and medical expenses. Keywords: Artificial Intelligence, Semiautonomous Deep Learning, Breast Cancer, Screening Mammography Supplemental material is available for this article. Published under a CC BY 4.0 license.
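The bootstrapped CIs quoted throughout the Results can be illustrated with a simple percentile bootstrap. Everything below is an illustrative assumption, not the study's implementation: the 0/1 encoding (1 = examination still flagged for radiologist review), the toy counts, and the function name are hypothetical, and the abstract does not describe the authors' actual resampling scheme.

```python
import random

def bootstrap_ci(values, stat=lambda v: sum(v) / len(v),
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for a statistic of a sample.

    Resample `values` with replacement n_boot times, compute `stat`
    on each resample, and read the CI off the empirical percentiles.
    """
    rng = random.Random(seed)
    n = len(values)
    stats = sorted(
        stat([values[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return stat(values), (lo, hi)

# Toy data: 1000 screening exams, 416 still needing radiologist review
# (roughly mirroring the 41.6% figure for U.S. dataset 1).
exams = [1] * 416 + [0] * 584
point, (lo, hi) = bootstrap_ci(exams)
print(f"review rate {point:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With 1000 resampled exams the percentile interval lands a few points either side of the observed rate, which is the qualitative behavior behind the narrow CIs reported above.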

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。目的 评估半自主人工智能(AI)模型识别乳腺癌筛查乳房X光照片的能力,并减少假阳性检查的数量。材料与方法 使用 123,248 张二维数字乳房 X 光照片(6,161 例癌症)对深度学习算法进行了训练,并对来自 2 个美国机构和 1 个英国机构(2008-2017 年)的 3 个非重叠数据集 14,831 例乳房 X 光筛查检查(1,026 例癌症)进行了回顾性研究。比较了人类和人工智能的独立性能。模拟了人类+人工智能的性能,以检查癌症检出率、检查次数、假阳性回调和良性活检的减少情况。对指标进行了调整,以模拟筛查人群的自然分布,并计算了引导置信区间(CI)和 P 值。结果 对所有数据集进行的回顾性评估显示,使用人工智能设备对癌症检出率的影响微乎其微(美国数据集 1 P = .02,美国数据集 2 P < .001,英国 P < .001,非劣效差为每 1000 例检查中发现 0.25 例癌症)。在美国数据集 1(11,592 例乳腺 X 光检查,101 例癌症,3810 名女性患者,平均年龄 57.3 ± [SD] 10.0 岁)中,该设备将需要放射医师判读的筛查减少了 41.6% [95% CI:40.6%, 42.4%] (P < .001),诊断检查回调减少了 31.1% [28.7%, 33.4%] (P < .001),良性针活检减少了 7.4% [4.1%, 12.4%] (P < .001)。美国数据集 2(1362 例乳腺 X 光检查,330 例癌症,1293 例女性患者,平均年龄 55.4 ± 10.5 岁)分别减少了 19.5% [16.9%, 22.1%] (P < .001), 11.9% [8.6%, 15.7%] (P < .001), 和 6.5% [0.0%, 19.0%] (P = .08)。英国数据集(1877 次乳房 X 光检查,595 例癌症,1491 名女性患者,平均年龄为 63.5 ± 7.1 SD)分别减少了 36.8% [34.4%, 39.7%] (P < .001), 17.1% [5.9%, 30.1%] (P < .001), 和 5.9% [2.9%, 11.5%] (P < .001)。结论 这项工作证明了半自主乳腺癌筛查系统在减少假阳性、不必要的手术、患者焦虑和医疗费用方面的潜力。以 CC BY 4.0 许可发布。
Citations: 0
Erratum for: Performance of the Winning Algorithms of the RSNA 2022 Cervical Spine Fracture Detection Challenge.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.249002
Ghee Rye Lee, Adam E Flanders, Tyler Richards, Felipe Kitamura, Errol Colak, Hui Ming Lin, Robyn L Ball, Jason Talbott, Luciano M Prevedello
Citations: 0
Artificial Intelligence for Breast Cancer Screening: Trade-offs between Sensitivity and Specificity.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.240184
Manisha Bahl, Synho Do
Citations: 0
Noninvasive Molecular Subtyping of Pediatric Low-Grade Glioma with Self-Supervised Transfer Learning.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.230333
Divyanshu Tak, Zezhong Ye, Anna Zapaischykova, Yining Zha, Aidan Boyd, Sridhar Vajapeyam, Rishi Chopra, Hasaan Hayat, Sanjay P Prabhu, Kevin X Liu, Hesham Elhalawani, Ali Nabavizadeh, Ariana Familiar, Adam C Resnick, Sabine Mueller, Hugo J W L Aerts, Pratiti Bandopadhayay, Keith L Ligon, Daphne A Haas-Kogan, Tina Y Poussaint, Benjamin H Kann

Purpose To develop and externally test a scan-to-prediction deep learning pipeline for noninvasive, MRI-based BRAF mutational status classification for pediatric low-grade glioma. Materials and Methods This retrospective study included two pediatric low-grade glioma datasets with linked genomic and diagnostic T2-weighted MRI data of patients: Dana-Farber/Boston Children's Hospital (development dataset, n = 214 [113 (52.8%) male; 104 (48.6%) BRAF wild type, 60 (28.0%) BRAF fusion, and 50 (23.4%) BRAF V600E]) and the Children's Brain Tumor Network (external testing, n = 112 [55 (49.1%) male; 35 (31.2%) BRAF wild type, 60 (53.6%) BRAF fusion, and 17 (15.2%) BRAF V600E]). A deep learning pipeline was developed to classify BRAF mutational status (BRAF wild type vs BRAF fusion vs BRAF V600E) via a two-stage process: (a) three-dimensional tumor segmentation and extraction of axial tumor images and (b) section-wise, deep learning-based classification of mutational status. Knowledge-transfer and self-supervised approaches were investigated to prevent model overfitting, with a primary end point of the area under the receiver operating characteristic curve (AUC). To enhance model interpretability, a novel metric, center of mass distance, was developed to quantify the model attention around the tumor. Results A combination of transfer learning from a pretrained medical imaging-specific network and self-supervised label cross-training (TransferX) coupled with consensus logic yielded the highest classification performance with an AUC of 0.82 (95% CI: 0.72, 0.91), 0.87 (95% CI: 0.61, 0.97), and 0.85 (95% CI: 0.66, 0.95) for BRAF wild type, BRAF fusion, and BRAF V600E, respectively, on internal testing. On external testing, the pipeline yielded an AUC of 0.72 (95% CI: 0.64, 0.86), 0.78 (95% CI: 0.61, 0.89), and 0.72 (95% CI: 0.64, 0.88) for BRAF wild type, BRAF fusion, and BRAF V600E, respectively. 
Conclusion Transfer learning and self-supervised cross-training improved classification performance and generalizability for noninvasive pediatric low-grade glioma mutational status prediction in a limited data scenario. Keywords: Pediatrics, MRI, CNS, Brain/Brain Stem, Oncology, Feature Detection, Diagnosis, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2024.
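The abstract describes section-wise classification "coupled with consensus logic," i.e., per-section predictions aggregated into one scan-level call. A minimal sketch, assuming majority voting over per-section argmax labels with a mean-probability tie-break — the paper's actual consensus rule is not specified in the abstract, and the softmax values below are toy numbers:

```python
from collections import Counter

CLASSES = ("BRAF wild type", "BRAF fusion", "BRAF V600E")

def consensus(section_probs):
    """Aggregate per-section class probabilities into a scan-level label.

    Hypothetical scheme: argmax each axial section, majority-vote across
    sections, and break ties by the larger summed probability.
    """
    votes = Counter(max(range(3), key=p.__getitem__) for p in section_probs)
    top_count = votes.most_common(1)[0][1]
    best = max(
        (c for c, v in votes.items() if v == top_count),
        key=lambda c: sum(p[c] for p in section_probs),
    )
    return CLASSES[best]

sections = [
    (0.2, 0.7, 0.1),   # per-section softmax outputs (toy values)
    (0.1, 0.6, 0.3),
    (0.5, 0.3, 0.2),
]
print(consensus(sections))  # two of three sections vote "BRAF fusion"
```

Slice-level aggregation of this kind is a common way to turn a 2D section classifier into a volume-level prediction when 3D training data are scarce.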

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 为基于 MRI 的小儿低级别胶质瘤(pLGG)无创 BRAF 突变状态分类开发一种扫描到预测的深度学习管道,并对其进行外部测试。材料与方法 这项回顾性研究包括两个 pLGG 数据集,其中包含患者的基因组和诊断 T2 加权 MRI 数据:BCH(开发数据集,n = 214 [60 (28%) BRAF-Fusion, 50 (23%) BRAF V600E, 104 (49%) 野生型])和儿童脑肿瘤网络(外部测试,n = 112 [60 (53%) BRAF-Fusion, 17 (15%) BRAF-V600E, 35 (32%) 野生型])。我们开发了一个深度学习管道,通过两个阶段对 BRAF 突变状态(V600E 与融合型与野生型)进行分类:1)轴向肿瘤图像的三维肿瘤分割和提取;2)基于深度学习的突变状态切片分类。我们研究了知识转移和自我监督方法,以防止模型过拟合,主要终点是接收者操作特征曲线下面积(AUC)。为了提高模型的可解释性,我们开发了一种新的指标--COMDist(质量中心距离),用于量化肿瘤周围的模型关注度。结果 在内部测试中,来自预训练医学影像特定网络的迁移学习和自监督标签交叉训练(TransferX)与共识逻辑相结合产生了最高的分类性能,对于野生型、BRAF-融合型和BRAF-V600E的AUC分别为0.82[95% CI:0.72-0.91]、0.87[95% CI:0.61-0.97]和0.85[95% CI:0.66-0.95]。在外部测试中,野生型、BRAF-融合型和 BRAF-V600E 类别的 AUC 分别为 0.72 [95% CI: 0.64-0.86]、0.78 [95% CI: 0.61-0.89] 和 0.72 [95% CI: 0.64-0.88]。结论 在数据有限的情况下,迁移学习和自我监督交叉训练提高了无创 pLGG 突变状态预测的分类性能和普适性。©RSNA, 2024.
Citations: 0
Impact of Deep Learning Image Reconstruction Methods on MRI Throughput.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.230181
Anthony Yang, Mark Finkelstein, Clara Koo, Amish H Doshi

Purpose To evaluate the effect of implementing two distinct commercially available deep learning reconstruction (DLR) algorithms on the efficiency of MRI examinations conducted in real clinical practice within an outpatient setting at a large, multicenter institution. Materials and Methods This retrospective study included 7346 examinations from 10 clinical MRI scanners analyzed during the pre- and postimplementation periods of DLR methods. Two different types of DLR methods, namely Digital Imaging and Communications in Medicine (DICOM)-based and k-space-based methods, were implemented in half of the scanners (three DICOM-based and two k-space-based), while the remaining five scanners had no DLR method implemented. Scan and room times of each examination type during the pre- and postimplementation periods were compared among the different DLR methods using the Wilcoxon test. Results The application of deep learning methods resulted in significant reductions in scan and room times for certain examination types. The DICOM-based method demonstrated up to a 53% reduction in scan times and a 41% reduction in room times for various study types. The k-space-based method demonstrated up to a 27% reduction in scan times but did not significantly reduce room times. Conclusion DLR methods were associated with reductions in scan and room times in a clinical setting, though the effects were heterogeneous depending on examination type. Thus, potential adopters should carefully evaluate their case mix to determine the impact of integrating these tools. Keywords: Deep Learning MRI Reconstruction, Reconstruction Algorithms, DICOM-based Reconstruction, k-Space-based Reconstruction © RSNA, 2024 See also the commentary by GharehMohammadi in this issue.
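Because the pre- and postimplementation periods contribute independent examinations, the "Wilcoxon test" here corresponds to the rank-sum (Mann-Whitney) variant. A stdlib sketch under stated assumptions: the normal approximation, the absence of ties, and the minute-scale toy scan times are all illustrative, not the study's data (scipy users would reach for `scipy.stats.mannwhitneyu`).

```python
from statistics import NormalDist

def rank_sum_test(pre, post):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Pools both samples, ranks them, and compares the rank sum of the
    first group against its null mean. Assumes no tied values.
    """
    n1, n2 = len(pre), len(post)
    pooled = sorted(pre + post)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    w = sum(rank[v] for v in pre)                 # rank sum of group 1
    mu = n1 * (n1 + n2 + 1) / 2                   # null mean of W
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Toy scan times (minutes): post-DLR scans trend shorter.
pre  = [22.1, 25.4, 24.3, 26.8, 23.5, 27.2, 25.9, 24.8]
post = [18.2, 17.6, 20.1, 19.4, 16.8, 21.3, 18.9, 17.1]
z, p = rank_sum_test(pre, post)
print(f"z = {z:.2f}, P = {p:.4f}")
```

With complete separation of the two toy samples the test comes out strongly significant, the same qualitative pattern as the scan-time reductions reported above.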

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响文章内容的错误。目的 评估在一家大型多中心机构的门诊环境中,实施两种不同的市售深度学习重建(DLR)算法对实际临床实践中进行的 MRI 检查效率的影响。材料与方法 这项回顾性研究包括十台临床磁共振成像扫描仪的 7346 次检查,在 DLR 方法实施前和实施后进行了分析。半数扫描仪(三台基于 DICOM,两台基于 k-space)采用了两种不同类型的 DLR 方法,即基于医学数字成像和通信(DICOM)的方法和基于 k-space的方法,其余五台扫描仪未采用 DLR 方法。使用 Wilcoxon 检验比较了不同 DLR 方法在实施前和实施后期间每种检查类型的扫描时间和检查室时间。结果 深度学习方法的应用显著缩短了某些检查类型的扫描和检查室时间。基于 DICOM 的方法显示,各种检查类型的扫描时间最多可减少 53%,检查室时间最多可减少 41%。基于 k 空间的方法最多可减少 27% 的扫描时间,但不能显著减少检查室时间。结论 DLR 方法与临床环境中扫描和室内时间的减少有关,但效果因检查类型而异。因此,潜在的采用者应仔细评估其病例组合,以确定整合这些工具的影响。©RSNA,2024。
Citations: 0
Bone Age Prediction under Stress.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.240137
Shahriar Faghani, Bradley J Erickson
Citations: 0
Impact of AI for Digital Breast Tomosynthesis on Breast Cancer Detection and Interpretation Time.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.230318
Eun Kyung Park, SooYoung Kwak, Weonsuk Lee, Joon Suk Choi, Thijs Kooi, Eun-Kyung Kim

Purpose To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time. Materials and Methods A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated. Results The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62. Conclusion The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency. 
Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Bae in this issue.
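The interreader agreement quoted above (Fleiss κ of 0.59 without AI, 0.62 with AI) can be computed directly from per-case rating counts. A minimal stdlib sketch — the two-category (negative/positive) setup and the toy counts below are illustrative assumptions, not the study's 15-reader data:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for m raters assigning each subject to one of k
    categories. `ratings` holds one count vector per subject, e.g.
    [13, 2] = 13 readers called the case negative, 2 called it positive.
    """
    m = sum(ratings[0])   # raters per subject
    n = len(ratings)      # subjects
    k = len(ratings[0])   # categories
    # Marginal proportion of each category across all ratings.
    p_j = [sum(r[j] for r in ratings) / (n * m) for j in range(k)]
    # Mean observed per-subject agreement.
    p_bar = sum(
        (sum(c * c for c in r) - m) / (m * (m - 1)) for r in ratings
    ) / n
    # Chance agreement from the marginals.
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Toy counts: 6 cases, 15 readers each, [negative, positive] votes.
cases = [[15, 0], [13, 2], [3, 12], [0, 15], [14, 1], [2, 13]]
print(round(fleiss_kappa(cases), 3))  # → 0.676
```

The statistic rewards per-case unanimity relative to what the overall vote split would produce by chance, which is why a small shift such as 0.59 to 0.62 can reflect a genuine tightening of reader consensus.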

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 开发一种用于数字乳腺断层扫描(DBT)中乳腺癌诊断的人工智能(AI),并研究它是否能提高诊断准确性并减少放射科医生的阅读时间。材料与方法 针对 DBT 开发了深度学习人工智能算法,并通过回顾性收集美国和韩国 14 家机构的检查结果(2010 年 1 月至 2021 年 12 月)进行了验证。我们进行了一项多中心、读者研究,比较了 15 位放射科医生(7 位乳腺专家,8 位普通放射科医生)在解读 258 位女性(平均 56 岁 ± 13.41 [SD])(包括 65 例癌症病例)的 DBT 检查结果时,使用和未使用人工智能的表现。对接收者操作特征曲线下面积(AUC)、灵敏度、特异性和读片时间进行了评估。结果 独立人工智能性能的 AUC 为 0.93(95% CI:0.92,0.94)。在读者研究中,使用人工智能后,放射医师的 AUC 从 0.90 (0.86, 0.93) 提高到 0.92 (0.88, 0.96; P = .003)。人工智能的特异性(89.64% (85.34, 93.94))高于放射科医生(77.34% (75.82, 78.87; P < .001))。使用 AI 进行读片时,放射科医生的灵敏度从 85.44% (83.22, 87.65) 提高到 87.69% (85.63, 89.75; P = .04),但特异性没有差异。阅读时间从无人工智能时的 54.41 秒(52.56, 56.27)减少到有人工智能时的 48.52 秒(46.79, 50.25)(P < .001)。用弗莱斯卡帕(Fleiss kappa)测量的读数间一致性分别从 0.59 上升到 0.62。结论 在乳腺癌检测方面,人工智能模型比放射科医生显示出更高的诊断准确性,并缩短了读片时间。在 DBT 解释中同时使用人工智能可以提高准确性和效率。©RSNA,2024。
Citations: 0
Lessons Learned in Building Expertly Annotated Multi-Institution Datasets and Hosting the RSNA AI Challenges.
IF 9.8 | Pub Date: 2024-05-01 | DOI: 10.1148/ryai.230227
Felipe C Kitamura, Luciano M Prevedello, Errol Colak, Safwan S Halabi, Matthew P Lungren, Robyn L Ball, Jayashree Kalpathy-Cramer, Charles E Kahn, Tyler Richards, Jason F Talbott, George Shih, Hui Ming Lin, Katherine P Andriole, Maryam Vazirabad, Bradley J Erickson, Adam E Flanders, John Mongan

The Radiological Society of North America (RSNA) has held artificial intelligence competitions to tackle real-world medical imaging problems at least annually since 2017. This article examines the challenges and processes involved in organizing these competitions, with a specific emphasis on the creation and curation of high-quality datasets. The collection of diverse and representative medical imaging data involves dealing with issues of patient privacy and data security. Furthermore, ensuring quality and consistency in data, which includes expert labeling and accounting for various patient and imaging characteristics, necessitates substantial planning and resources. Overcoming these obstacles requires meticulous project management and adherence to strict timelines. The article also highlights the potential of crowdsourced annotation to progress medical imaging research. Through the RSNA competitions, an effective global engagement has been realized, resulting in innovative solutions to complex medical imaging problems, thus potentially transforming health care by enhancing diagnostic accuracy and patient outcomes. Keywords: Use of AI in Education, Artificial Intelligence © RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。自 2017 年以来,北美放射学会(RSNA)至少每年举办一次人工智能竞赛,以解决现实世界中的医学影像问题。本文探讨了组织这些竞赛所面临的挑战和过程,特别强调了高质量数据集的创建和整理。收集多样化和有代表性的医学影像数据涉及处理患者隐私和数据安全问题。此外,要确保数据的质量和一致性,包括专家标记和考虑各种患者和成像特征,还需要大量的规划和资源。要克服这些障碍,就必须进行细致的项目管理,并严格遵守时间表。文章还强调了众包注释在推动医学影像研究方面的潜力。通过 RSNA 竞赛,实现了有效的全球参与,为复杂的医学影像问题提供了创新的解决方案,从而有可能通过提高诊断准确性和患者疗效来改变医疗保健。©RSNA,2024。
Citations: 0
When the Student Becomes the Master: Boosting Intracranial Hemorrhage Detection Generalizability with Teacher-Student Learning.
IF 9.8 Pub Date : 2024-05-01 DOI: 10.1148/ryai.240126
Nathaniel Swinburne
Citations: 0
Deep Learning-based Approach for Brainstem and Ventricular MR Planimetry: Application in Patients with Progressive Supranuclear Palsy.
IF 9.8 Pub Date : 2024-05-01 DOI: 10.1148/ryai.230151
Salvatore Nigro, Marco Filardi, Benedetta Tafuri, Martina Nicolardi, Roberto De Blasi, Alessia Giugno, Valentina Gnoni, Giammarco Milella, Daniele Urso, Stefano Zoccolella, Giancarlo Logroscino

Purpose To develop a fast and fully automated deep learning (DL)-based method for the MRI planimetric segmentation and measurement of the brainstem and ventricular structures most affected in patients with progressive supranuclear palsy (PSP). Materials and Methods In this retrospective study, T1-weighted MR images in healthy controls (n = 84) were used to train DL models for segmenting the midbrain, pons, middle cerebellar peduncle (MCP), superior cerebellar peduncle (SCP), third ventricle, and frontal horns (FHs). Internal, external, and clinical test datasets (n = 305) were used to assess segmentation model reliability. DL masks from test datasets were used to automatically extract midbrain and pons areas and the width of MCP, SCP, third ventricle, and FHs. Automated measurements were compared with those manually performed by an expert radiologist. Finally, these measures were combined to calculate the midbrain to pons area ratio, MR parkinsonism index (MRPI), and MRPI 2.0, which were used to differentiate patients with PSP (n = 71) from those with Parkinson disease (PD) (n = 129). Results Dice coefficients above 0.85 were found for all brain regions when comparing manual and DL-based segmentations. A strong correlation was observed between automated and manual measurements (Spearman ρ > 0.80, P < .001). DL-based measurements showed excellent performance in differentiating patients with PSP from those with PD, with an area under the receiver operating characteristic curve above 0.92. Conclusion The automated approach successfully segmented and measured the brainstem and ventricular structures. DL-based models may represent a useful approach to support the diagnosis of PSP and potentially other conditions associated with brainstem and ventricular alterations. Keywords: MR Imaging, Brain/Brain Stem, Segmentation, Quantification, Diagnosis, Convolutional Neural Network Supplemental material is available for this article. 
© RSNA, 2024 See also the commentary by Mohajer in this issue.
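For context, the planimetric indexes named in this abstract combine simple area and width measurements. Below is a minimal sketch of the Dice overlap metric and the MRPI formulas as commonly defined in the PSP imaging literature (MRPI = pons/midbrain area ratio × MCP/SCP width ratio; MRPI 2.0 additionally multiplies by the third ventricle to frontal horns width ratio). The values and function names are hypothetical illustrations, not code from this study:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mrpi(pons_area, midbrain_area, mcp_width, scp_width):
    """MR parkinsonism index: (pons area / midbrain area) * (MCP / SCP width)."""
    return (pons_area / midbrain_area) * (mcp_width / scp_width)

def mrpi_2(pons_area, midbrain_area, mcp_width, scp_width,
           third_ventricle_width, frontal_horns_width):
    """MRPI 2.0: MRPI scaled by the third ventricle / frontal horns width ratio."""
    return (mrpi(pons_area, midbrain_area, mcp_width, scp_width)
            * third_ventricle_width / frontal_horns_width)

# Illustrative (hypothetical) measurements in mm^2 and mm:
print(round(mrpi(500.0, 120.0, 8.0, 4.0), 2))  # -> 8.33
```

A relatively atrophic midbrain and SCP inflate the ratio, which is why higher MRPI values point toward PSP rather than Parkinson disease.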

Citations: 0