
Radiology-Artificial Intelligence: Latest Publications

Accelerating Complex Tissue Analysis in Prostate MRI: From Hours to Seconds Using Physics-informed Neural Networks.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.250016
Lisa C Adams, Keno K Bressem
Citations: 0
Bridging the Trust Gap: Conformal Prediction for AI-based Intracranial Hemorrhage Detection.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.250032
Peter K Ngum, Christopher G Filippi
Citations: 0
Applying Conformal Prediction to a Deep Learning Model for Intracranial Hemorrhage Detection to Improve Trustworthiness.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.240032
Cooper Gamble, Shahriar Faghani, Bradley J Erickson

Purpose To apply conformal prediction to a deep learning (DL) model for intracranial hemorrhage (ICH) detection and evaluate model performance in detection as well as model accuracy in identifying challenging cases. Materials and Methods This was a retrospective (November-December 2017) study of 491 noncontrast head CT volumes from the CQ500 dataset, in which three senior radiologists annotated sections containing ICH. The dataset was split into definite and challenging (uncertain) subsets, in which challenging images were defined as those in which there was disagreement among readers. A DL model was trained on patients from the definite data (training dataset) to perform ICH localization and classification into five classes. To develop an uncertainty-aware DL model, 1546 sections of the definite data (calibration dataset) were used for Mondrian conformal prediction (MCP). The uncertainty-aware DL model was tested on 8401 definite and challenging sections to assess its ability to identify challenging sections. The difference in predictive performance (P value) and ability to identify challenging sections (accuracy) were reported. Results The study included 146 patients (mean age, 45.7 years ± 9.9 [SD]; 76 [52.1%] men, 70 [47.9%] women). After the MCP procedure, the model achieved an F1 score of 0.919 for localization and classification. Additionally, it correctly identified patients with challenging cases with 95.3% (143 of 150) accuracy. It did not incorrectly label any definite sections as challenging. Conclusion The uncertainty-aware MCP-augmented DL model achieved high performance in ICH detection and high accuracy in identifying challenging sections, suggesting its usefulness in automated ICH detection and potential to increase trustworthiness of DL models in radiology. Keywords: CT, Head and Neck, Brain, Brain Stem, Hemorrhage, Feature Detection, Diagnosis, Supervised Learning Supplemental material is available for this article. 
© RSNA, 2025 See also commentary by Ngum and Filippi in this issue.
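The Mondrian conformal prediction (MCP) step described above can be sketched as follows. This is a minimal class-conditional illustration, assuming softmax probabilities as the model output and a hypothetical significance level `alpha`; the nonconformity function and names are ours, not the authors' exact implementation:

```python
import numpy as np

def mondrian_thresholds(cal_probs, cal_labels, alpha=0.05):
    """Per-class (Mondrian) nonconformity thresholds from a calibration set.

    cal_probs: (n, k) softmax probabilities; cal_labels: (n,) true classes.
    Nonconformity here is 1 minus the probability assigned to the true class.
    """
    n, k = cal_probs.shape
    thresholds = np.empty(k)
    for c in range(k):
        scores = 1.0 - cal_probs[cal_labels == c, c]
        m = len(scores)
        # Finite-sample-corrected (1 - alpha) quantile, clipped to 1.
        q = min(np.ceil((m + 1) * (1 - alpha)) / m, 1.0)
        thresholds[c] = np.quantile(scores, q)
    return thresholds

def prediction_set(probs, thresholds):
    """All labels whose nonconformity falls within the class threshold."""
    return np.flatnonzero(1.0 - probs <= thresholds)
```

A section would then be flagged as challenging when its prediction set is empty or contains more than one label, mirroring the paper's idea of routing uncertain cases to a radiologist for review.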

Citations: 0
Performance of Lung Cancer Prediction Models for Screening-detected, Incidental, and Biopsied Pulmonary Nodules.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.230506
Thomas Z Li, Kaiwen Xu, Aravind Krishnan, Riqiang Gao, Michael N Kammer, Sanja Antic, David Xiao, Michael Knight, Yency Martinez, Rafael Paez, Robert J Lentz, Stephen Deppen, Eric L Grogan, Thomas A Lasko, Kim L Sandler, Fabien Maldonado, Bennett A Landman

Purpose To evaluate the performance of eight lung cancer prediction models on patient cohorts with screening-detected, incidentally detected, and bronchoscopically biopsied pulmonary nodules. Materials and Methods This study retrospectively evaluated promising predictive models for lung cancer prediction in three clinical settings: lung cancer screening with low-dose CT, incidentally detected pulmonary nodules, and nodules deemed suspicious enough to warrant a biopsy. The area under the receiver operating characteristic curve of eight validated models, including logistic regressions on clinical variables and radiologist nodule characterizations, artificial intelligence (AI) on chest CT scans, longitudinal imaging AI, and multimodal approaches for prediction of lung cancer risk was assessed in nine cohorts (n = 898, 896, 882, 219, 364, 117, 131, 115, 373) from multiple institutions. Each model was implemented from their published literature, and each cohort was curated from primary data sources collected over periods from 2002 to 2021. Results No single predictive model emerged as the highest-performing model across all cohorts, but certain models performed better in specific clinical contexts. Single-time-point chest CT AI performed well for screening-detected nodules but did not generalize well to other clinical settings. Longitudinal imaging and multimodal models demonstrated comparatively good performance on incidentally detected nodules. When applied to biopsied nodules, all models showed low performance. Conclusion Eight lung cancer prediction models failed to generalize well across clinical settings and sites outside of their training distributions. Keywords: Diagnosis, Classification, Application Domain, Lung Supplemental material is available for this article. © RSNA, 2025 See also commentary by Shao and Niu in this issue.
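Evaluating many models across many cohorts reduces to computing the area under the receiver operating characteristic curve for each (model, cohort) pair. A minimal sketch using the Mann-Whitney rank identity, assuming untied scores (this is an illustration, not the authors' evaluation code):

```python
import numpy as np

def auroc(y_true, y_score):
    """AUC via the Mann-Whitney U identity: the probability that a random
    positive case is scored higher than a random negative case.
    Assumes no tied scores (ranks are not averaged here)."""
    y_true = np.asarray(y_true)
    ranks = np.empty(len(y_score))
    ranks[np.argsort(y_score)] = np.arange(1, len(y_score) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def evaluate(models, cohorts):
    """AUC of each scoring function on each (features, labels) cohort."""
    return {m: {c: auroc(y, f(X)) for c, (X, y) in cohorts.items()}
            for m, f in models.items()}
```

Comparing the resulting AUC table row by row is what reveals the generalization gaps the study reports: a model tuned on screening cohorts can rank near the top there and near the bottom on biopsied nodules.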

Citations: 0
Physics-Informed Autoencoder for Prostate Tissue Microstructure Profiling with Hybrid Multidimensional MRI.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.240167
Batuhan Gundogdu, Aritrick Chatterjee, Milica Medved, Ulas Bagci, Gregory S Karczmar, Aytekin Oto

Purpose To evaluate the performance of Physics-Informed Autoencoder (PIA), a self-supervised deep learning model, in measuring tissue-based biomarkers for prostate cancer (PCa) using hybrid multidimensional MRI. Materials and Methods This retrospective study introduces PIA, a self-supervised deep learning model that integrates a three-compartment diffusion-relaxation model with hybrid multidimensional MRI. PIA was trained to encode the biophysical model into a deep neural network to predict measurements of tissue-specific biomarkers for PCa without extensive training data requirements. Comprehensive in silico and in vivo experiments, using histopathology measurements as the reference standard, were conducted to validate the model's efficacy in comparison to the traditional nonlinear least squares (NLLS) algorithm. PIA's robustness to noise was tested in in silico experiments with varying signal-to-noise ratio (SNR) conditions, and in vivo performance for estimating volume fractions was evaluated in 21 patients (mean age, 60 years ± 6.6 [SD]; all male) with PCa (71 regions of interest). Evaluation metrics included the intraclass correlation coefficient (ICC) and Pearson correlation coefficient. Results PIA predicted the reference standard tissue parameters with high accuracy, outperforming conventional NLLS methods, especially under noisy conditions (rs = 0.80 vs 0.65, P < .001 for epithelium volume at SNR of 20:1). In in vivo validation, PIA's noninvasive volume fraction estimates matched quantitative histology (ICC, 0.94, 0.85, and 0.92 for epithelium, stroma, and lumen compartments, respectively; P < .001 for all). PIA's measurements strongly correlated with PCa aggressiveness (r = 0.75, P < .001). Furthermore, PIA ran 10 000 times faster than NLLS (0.18 second vs 40 minutes per image). 
Conclusion PIA provided accurate prostate tissue biomarker measurements from MRI data with better robustness to noise and computational efficiency compared with the NLLS algorithm. The results demonstrate the potential of PIA as an accurate, noninvasive, and explainable artificial intelligence method for PCa detection. Keywords: Prostate, Stacked Auto-Encoders, Tissue Characterization, MR-Diffusion-weighted Imaging Supplemental material is available for this article. ©RSNA, 2025 See also commentary by Adams and Bressem in this issue.
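The three-compartment diffusion-relaxation model that PIA encodes can be written as a signal equation in which each compartment contributes its volume fraction weighted by diffusion and T2 decay. The sketch below is a generic forward model of that form; the function name, units, and parameter values are illustrative, and the paper's exact parameterization may differ:

```python
import numpy as np

def hybrid_mri_signal(b_values, echo_times, fractions, adcs, t2s):
    """Three-compartment hybrid multidimensional MRI forward model:

        S(b, TE) = sum_i f_i * exp(-b * ADC_i) * exp(-TE / T2_i)

    fractions: volume fractions (e.g., epithelium, stroma, lumen), sum to 1.
    b_values in s/mm^2; echo_times and t2s in ms; adcs in mm^2/s.
    Returns the signal on the (b, TE) grid, normalized so S(0, 0) = 1.
    """
    b = np.asarray(b_values, float)[:, None]     # shape (n_b, 1)
    te = np.asarray(echo_times, float)[None, :]  # shape (1, n_te)
    signal = np.zeros((b.shape[0], te.shape[1]))
    for f, adc, t2 in zip(fractions, adcs, t2s):
        signal += f * np.exp(-b * adc) * np.exp(-te / t2)
    return signal
```

In a physics-informed autoencoder, a neural network maps the measured signal to the compartment parameters and a fixed forward model of this kind serves as the decoder, so training minimizes reconstruction error against the acquired data rather than ground truth labels — which is how such a model avoids large annotated training sets.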

Citations: 0
Bridging Artificial Intelligence Models to Clinical Practice: Challenges in Lung Cancer Prediction.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.250080
Xiaonan Shao, Rong Niu
Citations: 0
2024 Manuscript Reviewers: A Note of Thanks.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.250163
Umar Mahmood, Charles E Kahn
Citations: 0
A Serial MRI-based Deep Learning Model to Predict Survival in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-02-01 DOI: 10.1148/ryai.230544
Jia Kou, Jun-Yi Peng, Wen-Bing Lv, Chen-Fei Wu, Zi-Hang Chen, Guan-Qun Zhou, Ya-Qin Wang, Li Lin, Li-Jun Lu, Ying Sun

Purpose To develop and evaluate a deep learning-based prognostic model for predicting survival in locoregionally advanced nasopharyngeal carcinoma (LA-NPC) using serial MRI before and after induction chemotherapy (IC). Materials and Methods This multicenter retrospective study included 1039 patients with LA-NPC (779 male and 260 female patients; mean age, 44 years ± 11 [SD]) diagnosed between December 2011 and January 2016. A radiomics-clinical prognostic model (model RC) was developed from pre- and post-IC MRI acquisitions and other clinical factors using graph convolutional neural networks. The concordance index (C-index) was used to evaluate model performance in predicting disease-free survival (DFS). The survival benefits of concurrent chemoradiation therapy (CCRT) were analyzed in model-defined risk groups. Results The C-indexes of model RC for predicting DFS were significantly higher than those of TNM staging in the internal (0.79 vs 0.53) and external (0.79 vs 0.62, both P < .001) testing cohorts. The 5-year DFS for the model RC-defined low-risk group was significantly better than that of the high-risk group (90.6% vs 58.9%, P < .001). In high-risk patients, those who underwent CCRT had a higher 5-year DFS rate than those who did not (58.7% vs 28.6%, P = .03). There was no evidence of a difference in 5-year DFS rate in low-risk patients who did or did not undergo CCRT (91.9% vs 81.3%, P = .19). Conclusion Serial MRI before and after IC can effectively help predict survival in LA-NPC. The radiomics-clinical prognostic model developed using a graph convolutional network-based deep learning method showed good risk discrimination capabilities and may facilitate risk-adapted therapy. Keywords: Nasopharyngeal Carcinoma, Deep Learning, Induction Chemotherapy, Serial MRI, MR Imaging, Radiomics, Prognosis, Radiation Therapy/Oncology, Head/Neck Supplemental material is available for this article. © RSNA, 2025.
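The concordance index used to benchmark the model against TNM staging can be illustrated with a minimal Harrell's C implementation. This O(n²) sketch handles right censoring in the standard way (a pair is comparable only when the earlier time is an observed event); it is an illustration of the metric, not the study's code:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs in which
    the patient who failed earlier carries the higher predicted risk.
    events[i] is 1 for an observed event, 0 for a censored follow-up."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Comparable only if i's earlier time is an actual event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # tied risks count as half
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the reported 0.79 vs 0.53 gap over TNM staging is a substantial improvement.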

Citations: 0
Evaluating the Impact of Changes in Artificial Intelligence-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-02-01 DOI: 10.1148/ryai.230597
Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant

Purpose To evaluate the change in digital breast tomosynthesis artificial intelligence (DBT-AI) case scores over sequential screenings. Materials and Methods This retrospective review included 21 108 female patients (mean age ± SD, 58.1 years ± 11.5) with 31 741 DBT screening examinations performed at a single site from February 3, 2020, to September 12, 2022. Among 7000 patients with two or more DBT-AI screenings, 1799 had a 1-year follow-up and were included in the analysis. DBT-AI case scores and differences in case score over time were determined. Case scores ranged from 0 to 100. For each screening outcome (true positive [TP], false positive [FP], true negative [TN], false negative [FN]), mean and median case score change was calculated. Results The highest average case score was seen in TP examinations (average, 75; range, 7-100; n = 41), and the lowest average case score was seen in TN examinations (average, 34; range, 0-100; n = 1640). The largest positive case score change was seen in TP examinations (mean case score change, 21.1; median case score change, 17). FN examinations included mammographically occult cancers diagnosed following supplemental screening and those found at symptomatic diagnostic imaging. Differences between TP and TN mean case score change (P < .001) and between TP and FP mean case score change (P = .02) were statistically significant. Conclusion Combining the DBT-AI case score with its change over time may help radiologists make recall decisions in DBT screening. All studies with high case score and/or case score changes should be carefully scrutinized to maximize screening performance. Keywords: Mammography, Breast, Computer Aided Diagnosis (CAD) Supplemental material is available for this article. © RSNA, 2025.
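The case score change statistic is simply the difference between sequential screening scores, summarized per outcome group. A minimal sketch with hypothetical numbers (the function names are ours, not the study's):

```python
import numpy as np

def score_changes(prior_scores, current_scores):
    """Change in AI case score (0-100 scale) between sequential screenings."""
    return np.asarray(current_scores, float) - np.asarray(prior_scores, float)

def summarize_by_outcome(changes, outcomes):
    """Mean and median case score change per outcome label (TP/FP/TN/FN)."""
    outcomes = np.asarray(outcomes)
    return {label: (float(changes[outcomes == label].mean()),
                    float(np.median(changes[outcomes == label])))
            for label in np.unique(outcomes)}
```

Under the paper's logic, a large positive change on the current examination supports recall even when the absolute case score is only moderate.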

{"title":"Evaluating the Impact of Changes in Artificial Intelligence-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.","authors":"Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant","doi":"10.1148/ryai.230597","DOIUrl":"10.1148/ryai.230597","url":null,"abstract":"<p><p>Purpose To evaluate the change in digital breast tomosynthesis artificial intelligence (DBT-AI) case scores over sequential screenings. Materials and Methods This retrospective review included 21 108 female patients (mean age ± SD, 58.1 years ± 11.5) with 31 741 DBT screening examinations performed at a single site from February 3, 2020, to September 12, 2022. Among 7000 patients with two or more DBT-AI screenings, 1799 had a 1-year follow-up and were included in the analysis. DBT-AI case scores and differences in case score over time were determined. Case scores ranged from 0 to 100. For each screening outcome (true positive [TP], false positive [FP], true negative [TN], false negative [FN]), mean and median case score change was calculated. Results The highest average case score was seen in TP examinations (average, 75; range, 7-100; <i>n</i> = 41), and the lowest average case score was seen in TN examinations (average, 34; range, 0-100; <i>n</i> = 1640). The largest positive case score change was seen in TP examinations (mean case score change, 21.1; median case score change, 17). FN examinations included mammographically occult cancers diagnosed following supplemental screening and those found at symptomatic diagnostic imaging. Differences between TP and TN mean case score change (<i>P</i> < .001) and between TP and FP mean case score change (<i>P</i> = .02) were statistically significant. Conclusion Using the combination of DBT AI case score with change in case score over time may help radiologists make recall decisions in DBT screening. 
All studies with high case score and/or case score changes should be carefully scrutinized to maximize screening performance. <b>Keywords:</b> Mammography, Breast, Computer Aided Diagnosis (CAD) <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230597"},"PeriodicalIF":8.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950889/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
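The outcome-stratified summary this abstract reports (mean and median case score change for TP, FP, TN, and FN examinations) can be sketched in a few lines of Python. The score changes below are made-up illustrative values, not the study's data, and `change_summary` is a hypothetical helper name.

```python
from statistics import mean, median

# Hypothetical (case score change, screening outcome) pairs; the study's
# patient-level data are not public, so these values are illustrative only.
records = [
    (21, "TP"), (17, "TP"), (25, "TP"),
    (-3, "TN"), (0, "TN"), (2, "TN"),
    (10, "FP"), (6, "FP"),
    (4, "FN"),
]

def change_summary(records):
    """Group case score changes by screening outcome and summarize each group."""
    by_outcome = {}
    for change, outcome in records:
        by_outcome.setdefault(outcome, []).append(change)
    return {
        outcome: {"mean": mean(changes), "median": median(changes)}
        for outcome, changes in by_outcome.items()
    }

summary = change_summary(records)
print(summary["TP"])  # → {'mean': 21, 'median': 21}
```

With real screening data, each record would carry the difference between the current and prior examination's DBT-AI case score, grouped by the adjudicated outcome of the current examination.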
Citations: 0
RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-01 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak

Purpose To evaluate the performance of the winning machine learning models from the 2023 RSNA Abdominal Trauma Detection AI Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26 and October 15, 2023. The multicenter competition dataset consisted of 4274 abdominal trauma CT scans, in which solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade, or high-grade injury. Studies were labeled as positive or negative for the presence of bowel and mesenteric injury and active extravasation. In this study, performances of the eight award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range, 0.90-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For the detection of bowel and mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range, 0.74-0.93) and 0.85 (range, 0.79-0.89), respectively. Conclusion The award-winning models from the artificial intelligence challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. 
Keywords: Abdominal Trauma, CT, American Association for the Surgery of Trauma, Machine Learning, Artificial Intelligence Supplemental material is available for this article. © RSNA, 2024.
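The per-category summary reported for the challenge (mean AUC with the range across the eight winning models) amounts to simple aggregation over per-model AUCs. The values below are illustrative stand-ins chosen to match the liver and splenic figures, not the actual per-model results, which the abstract does not list.

```python
# Hypothetical per-model AUCs for two injury categories across the eight
# award-winning models; illustrative stand-ins, not the challenge's results.
model_aucs = {
    "liver":  [0.90, 0.91, 0.92, 0.92, 0.93, 0.92, 0.94, 0.92],
    "spleen": [0.87, 0.90, 0.91, 0.92, 0.91, 0.92, 0.93, 0.92],
}

def summarize(aucs):
    """Return (mean AUC rounded to 2 decimals, (min, max)) across models."""
    return round(sum(aucs) / len(aucs), 2), (min(aucs), max(aucs))

for organ, aucs in model_aucs.items():
    mean_auc, (low, high) = summarize(aucs)
    print(f"{organ}: mean AUC {mean_auc} (range, {low:.2f}-{high:.2f})")
# liver: mean AUC 0.92 (range, 0.90-0.94)
# spleen: mean AUC 0.91 (range, 0.87-0.93)
```

Each per-model AUC would itself come from the receiver operating characteristic analysis of that model's injury scores against the reference labels; the aggregation step shown here only averages and ranges those values.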

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 评估 2023 年 RSNA 腹部创伤检测人工智能挑战赛获奖机器学习(ML)模型的性能。材料与方法 比赛在 Kaggle 上举办,时间为 2023 年 7 月 26 日至 2023 年 10 月 15 日。多中心竞赛数据集包括 4,274 份腹部创伤 CT 扫描,其中实体器官(肝脏、脾脏和肾脏)被标注为健康、低度或高度损伤。对于肠/括约肌损伤和活动性外渗,研究结果被标记为阳性或阴性。在本研究中,对 8 个获奖模型的性能进行了回顾性评估,并使用各种指标(包括接收器操作特征曲线下面积 (AUC))对每个损伤类别进行了比较。所报告的这些指标的平均值是通过对每种特定损伤类型的所有模型的性能进行平均计算得出的。结果 这些模型在检测实体器官损伤,尤其是高级别损伤方面表现出很强的性能。在损伤的二元检测中,模型对肝脏损伤的平均 AUC 值为 0.92(范围:0.91-0.94),对脾脏损伤的平均 AUC 值为 0.91(范围:0.87-0.93),对肾脏损伤的平均 AUC 值为 0.94(范围:0.93-0.95)。这些模型的平均 AUC 值分别为:高级别肝损伤 0.98(范围:0.96-0.98),高级别脾损伤 0.98(范围:0.97-0.99),高级别肾损伤 0.98(范围:0.97-0.98)。在检测肠道/肠膜损伤和活动性外渗方面,模型的平均 AUC 值分别为 0.85(范围:0.74-0.73)和 0.85(范围:0.79-0.89)。结论 在人工智能挑战赛中获奖的模型在检测 CT 扫描中的腹部创伤,尤其是高级别创伤方面表现出了很强的性能。这些模型可作为未来研究和算法的性能基线。©RSNA,2024。
Citations: 0