
Latest Publications in Radiology-Artificial Intelligence

Applying Conformal Prediction to a Deep Learning Model for Intracranial Hemorrhage Detection to Improve Trustworthiness.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-27 DOI: 10.1148/ryai.240032
Cooper Gamble, Shahriar Faghani, Bradley J Erickson

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To apply conformal prediction to a deep learning (DL) model for intracranial hemorrhage (ICH) detection and evaluate model performance in detection as well as model accuracy in identifying challenging cases. Materials and Methods This was a retrospective (November 2017 through December 2017) study of 491 noncontrast head CT volumes from the CQ500 dataset in which three senior radiologists annotated sections containing ICH. The dataset was split into definite and challenging (uncertain) subsets, where challenging images were defined as those in which there was disagreement among readers. A DL model was trained on 146 patients (mean age, 45.7 years; 70 female, 76 male) from the definite data (training dataset) to perform ICH localization and classification into five classes. To develop an uncertainty-aware DL model, 1,546 sections of the definite data (calibration dataset) were used for Mondrian conformal prediction (MCP). The uncertainty-aware DL model was tested on 8,401 definite and challenging sections to assess its ability to identify challenging sections. The difference in predictive performance (P value) and ability to identify challenging sections (accuracy) were reported. Results After the MCP procedure, the model achieved an F1 score of 0.920 for ICH classification on the test dataset. Additionally, it correctly identified 6,837 of the 6,856 total challenging sections as challenging (99.7% accuracy). It did not incorrectly label any definite sections as challenging.
Conclusion The uncertainty-aware MCP-augmented DL model achieved high performance in ICH detection and high accuracy in identifying challenging sections, suggesting its usefulness in automated ICH detection and potential to increase trustworthiness of DL models in radiology. ©RSNA, 2024.
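The Mondrian (class-conditional) conformal prediction step described above can be sketched in a few lines: calibrate a per-class nonconformity threshold, then form a prediction set for each new section; sections whose prediction set is not a single label would be flagged as challenging. This is a minimal illustration with simulated softmax scores and an assumed 10% miscoverage level, not the authors' implementation:

```python
import numpy as np

def mondrian_calibrate(cal_scores, cal_labels, n_classes, alpha=0.1):
    """Per-class (Mondrian) nonconformity thresholds from calibration data.

    cal_scores: (n, n_classes) softmax probabilities
    cal_labels: (n,) true class indices
    Nonconformity = 1 - probability assigned to the true class.
    alpha is the target miscoverage level (assumed here, not stated in the abstract).
    """
    thresholds = {}
    for c in range(n_classes):
        scores_c = 1.0 - cal_scores[cal_labels == c, c]
        n = len(scores_c)
        # conservative finite-sample quantile level
        q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        thresholds[c] = np.quantile(scores_c, q_level)
    return thresholds

def prediction_set(probs, thresholds):
    """All classes whose nonconformity falls under their class threshold."""
    return [c for c, t in thresholds.items() if 1.0 - probs[c] <= t]

# toy calibration set with 3 classes: scores sharpened toward the true class
rng = np.random.default_rng(0)
cal_labels = rng.integers(0, 3, 300)
cal_scores = rng.dirichlet(np.ones(3), 300)
cal_scores[np.arange(300), cal_labels] += 2.0
cal_scores /= cal_scores.sum(axis=1, keepdims=True)

thr = mondrian_calibrate(cal_scores, cal_labels, 3)
ps = prediction_set(np.array([0.80, 0.15, 0.05]), thr)
flagged_as_challenging = len(ps) != 1  # empty or multi-label set -> uncertain
```

A confident probe yields a singleton set and is not flagged; an ambiguous probe (flat probabilities) would yield an empty or multi-class set and be routed as challenging.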

Citations: 0
Achieving More with Less: Combining Strong and Weak Labels for Intracranial Hemorrhage Detection.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240670
Tugba Akinci D'Antonoli, Jeffrey D Rudie
Citations: 0
Addressing the Generalizability of AI in Radiology Using a Novel Data Augmentation Framework with Synthetic Patient Image Data: Proof-of-Concept and External Validation for Classification Tasks in Multiple Sclerosis.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230514
Gianluca Brugnara, Chandrakanth Jayachandran Preetha, Katerina Deike, Robert Haase, Thomas Pinetz, Martha Foltyn-Dumitru, Mustafa A Mahmutoglu, Brigitte Wildemann, Ricarda Diem, Wolfgang Wick, Alexander Radbruch, Martin Bendszus, Hagen Meredig, Aditya Rastogi, Philipp Vollmuth

Artificial intelligence (AI) models often face performance drops after deployment to external datasets. This study evaluated the potential of a novel data augmentation framework based on generative adversarial networks (GANs) that creates synthetic patient image data for model training to improve model generalizability. Model development and external testing were performed for a given classification task, namely the detection of new fluid-attenuated inversion recovery (FLAIR) lesions at MRI during longitudinal follow-up of patients with multiple sclerosis (MS). An internal dataset of 669 patients with MS (n = 3083 examinations) was used to develop an attention-based network, trained both with and without the inclusion of the GAN-based synthetic data augmentation framework. External testing was performed on 134 patients with MS from a different institution, with MR images acquired using different scanners and protocols than images used during training. Models trained using synthetic data augmentation showed a significant performance improvement when applied to external data (area under the receiver operating characteristic curve [AUC], 83.6% without synthetic data vs 93.3% with synthetic data augmentation; P = .03), achieving comparable results to the internal test set (AUC, 95.0%; P = .53), whereas models without synthetic data augmentation demonstrated a performance drop upon external testing (AUC, 93.8% on internal dataset vs 83.6% on external data; P = .03). Data augmentation with synthetic patient data substantially improved performance of AI models on unseen MRI data and may be extended to other clinical conditions or tasks to mitigate domain shift, limit class imbalance, and enhance the robustness of AI applications in medical imaging. Keywords: Brain, Brain Stem, Multiple Sclerosis, Synthetic Data Augmentation, Generative Adversarial Network Supplemental material is available for this article. © RSNA, 2024.
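The AUC values this abstract compares can be estimated directly from labels and model scores via the Mann-Whitney U statistic (the rank-based equivalent of the area under the ROC curve). A minimal, numpy-only sketch with toy inputs, not the study's evaluation code:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: binary 0/1 array; scores: higher = more positive.
    Ties among scores are handled with mid-ranks.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # assign mid-ranks to tied scores
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# perfect separation -> AUC 1.0; completely tied scores -> AUC 0.5
auc_perfect = roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
auc_mixed = roc_auc([0, 1, 0, 1], [0.4, 0.4, 0.4, 0.4])
```

The generalization gap the study quantifies is then simply the difference between this estimator evaluated on the internal test set and on the external one.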

Citations: 0
Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230520
Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li

Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; P < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. 
Keywords: Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI Supplemental material is available for this article. Published under a CC BY 4.0 license.
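The macro- vs micro-averaged AUC distinction used in this evaluation is worth making concrete: macro averaging computes a one-vs-rest AUC per condition and averages them (each of the 15 conditions weighted equally), while micro averaging pools all one-vs-rest decisions into a single ranking (frequent conditions dominate). A toy three-class sketch with made-up scores, assuming no ties between positive and negative scores:

```python
import numpy as np

def auc(y, s):
    """Rank-based one-vs-rest AUC (assumes no pos/neg score ties)."""
    y = np.asarray(y)
    s = np.asarray(s, dtype=float)
    r = s.argsort().argsort() + 1          # 1-based ranks of the scores
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (r[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# one-hot labels and softmax-like scores for 3 conditions (toy values)
Y = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
S = np.array([[0.9, 0.05, 0.05],
              [0.7, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.3, 0.5, 0.25]])

ma_auc = np.mean([auc(Y[:, c], S[:, c]) for c in range(3)])  # macro: per class
mi_auc = auc(Y.ravel(), S.ravel())                           # micro: pooled
```

Here the rare third class is predicted imperfectly, so the macro average (which weights it equally) comes out below the pooled micro estimate; with strong class imbalance the two can diverge much further.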

Citations: 0
Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230296
Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill

Purpose To develop a highly generalizable weakly supervised model to automatically detect and localize image-level intracranial hemorrhage (ICH) by using study-level labels. Materials and Methods In this retrospective study, the proposed model was pretrained on the image-level Radiological Society of North America dataset and fine-tuned on a local dataset by using attention-based bidirectional long short-term memory networks. This local training dataset included 10 699 noncontrast head CT scans in 7469 patients, with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and its generalizability was evaluated on an external independent dataset. Results The model achieved a positive predictive value (PPV) of 85.7% (95% CI: 84.0, 87.4) and an area under the receiver operating characteristic curve of 0.96 (95% CI: 0.96, 0.97) on the held-out local test set (n = 7243, 3721 female) and 89.3% (95% CI: 87.8, 90.7) and 0.96 (95% CI: 0.96, 0.97), respectively, on the external test set (n = 491, 178 female). For 100 randomly selected samples, the model achieved performance on par with two neuroradiologists, but with a significantly faster (P < .05) diagnostic time of 5.04 seconds per scan (vs 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with neuroradiologists' interpretations. Conclusion The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows. Keywords: Computer-Aided Diagnosis (CAD), Brain/Brain Stem, Hemorrhage, Convolutional Neural Network (CNN), Transfer Learning Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Akinci D'Antonoli and Rudie in this issue.
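The weak supervision described above rests on attention pooling over section-level features: the model assigns each CT section an attention weight, the study-level prediction is a weighted sum of section features, and the weights themselves localize which sections drove a positive call. A minimal numpy sketch of such an attention-MIL head with random, illustrative parameters (the actual model uses attention-based bidirectional LSTMs over the section sequence):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil(section_feats, w_att, v_att, w_cls):
    """Study-level ICH probability from per-section features.

    section_feats: (n_sections, d) embeddings, one row per CT section
    w_att: (d, h), v_att: (h,) attention parameters; w_cls: (d,) classifier
    Returns (study_probability, per_section_attention_weights).
    """
    scores = np.tanh(section_feats @ w_att) @ v_att   # (n_sections,)
    alpha = softmax(scores)                           # weights sum to 1
    study_feat = alpha @ section_feats                # attention-weighted pooling
    logit = study_feat @ w_cls
    prob = 1.0 / (1.0 + np.exp(-logit))
    return prob, alpha

rng = np.random.default_rng(42)
n_sections, d, h = 30, 16, 8
feats = rng.normal(size=(n_sections, d))
prob, alpha = attention_mil(feats, rng.normal(size=(d, h)),
                            rng.normal(size=h), rng.normal(size=d))
# sections with the largest alpha are the candidate hemorrhage locations
top_sections = np.argsort(alpha)[::-1][:3]
```

Because only the study-level label enters the loss, the model needs no section-level annotations, yet the learned alpha values provide the image-level localization and the heatmap-style interpretability the abstract reports.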

Citations: 0
WAW-TACE: A Hepatocellular Carcinoma Multiphase CT Dataset with Segmentations, Radiomics Features, and Clinical Data.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240296
Krzysztof Bartnik, Tomasz Bartczak, Mateusz Krzyziński, Krzysztof Korzeniowski, Krzysztof Lamparski, Piotr Węgrzyn, Eric Lam, Mateusz Bartkowiak, Tadeusz Wróblewski, Katarzyna Mech, Magdalena Januszewicz, Przemysław Biecek
Citations: 0
The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240101
Jeffrey D Rudie, Hui-Ming Lin, Robyn L Ball, Sabeena Jalal, Luciano M Prevedello, Savvas Nicolaou, Brett S Marinelli, Adam E Flanders, Kirti Magudia, George Shih, Melissa A Davis, John Mongan, Peter D Chang, Ferco H Berger, Sebastiaan Hermans, Meng Law, Tyler Richards, Jan-Peter Grunz, Andreas Steven Kunz, Shobhit Mathur, Sandro Galea-Soler, Andrew D Chung, Saif Afat, Chin-Chi Kuo, Layal Aweidah, Ana Villanueva Campos, Arjuna Somasundaram, Felipe Antonio Sanchez Tijmes, Attaporn Jantarangkoon, Leonardo Kayat Bittencourt, Michael Brassil, Ayoub El Hajjami, Hakan Dogan, Muris Becircic, Agrahara G Bharatkumar, Eduardo Moreno Júdice de Mattos Farina, Errol Colak

Supplemental material is available for this article.

The RSNA Abdominal Traumatic Injury CT (RATIC) dataset contains annotations for 4,274 abdominal CT studies with traumatic injuries and is available at https://imaging.rsna.org/dataset/5 and https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection. ©RSNA, 2024.
Citations: 0
Breaking Ground on the Application of AI to HCC: It's All about Data.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240660
Ryan Bitar, Julius Chapiro
Citations: 0
Watch Your Back! How Deep Learning Is Cracking the Real World of CT for Cervical Spine Fractures.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.240604
Riccardo Levi, Letterio S Politi
Citations: 0
AI-integrated Screening to Replace Double Reading of Mammograms: A Population-wide Accuracy and Feasibility Study.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-01 DOI: 10.1148/ryai.230529
Mohammad T Elhakim, Sarah W Stougaard, Ole Graumann, Mads Nielsen, Oke Gerke, Lisbet B Larsen, Benjamin S B Rasmussen

Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249 402 mammograms from a representative screening population. A commercial AI system replaced the first reader (scenario 1: integrated AIfirst), the second reader (scenario 2: integrated AIsecond), or both readers for triaging of low- and high-risk cases (scenario 3: integrated AItriage). AI threshold values were chosen partly on the basis of previous validation and partly to set the screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, integrated AIfirst showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%, P < .001). Integrated AIsecond had lower sensitivity (-1.58%, P < .001), negative predictive value (NPV) (-0.01%, P < .001), and recall rate (-0.06%, P = .04) but a higher positive predictive value (PPV) (+0.03%, P < .001) and arbitration rate (+1.22%, P < .001). Integrated AItriage achieved higher sensitivity (+1.33%, P < .001), PPV (+0.36%, P = .03), and NPV (+0.01%, P < .001) but a lower arbitration rate (-0.88%, P < .001). Replacing one or both readers with AI seems feasible; however, the site of application in the workflow can have clinically relevant effects on accuracy and workload. Keywords: Mammography, Breast, Neoplasms-Primary, Screening, Epidemiology, Diagnosis, Convolutional Neural Network (CNN) Supplemental material is available for this article. Published under a CC BY 4.0 license.
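As a rough sketch of how a triage scenario of this kind routes cases and how the reported accuracy measures are derived: the threshold values, routing labels, and helper functions below are illustrative assumptions, not the study's implementation.

```python
def triage(ai_score, low_thr, high_thr):
    """Route one screening examination by its AI risk score:
    clearly low-risk exams are auto-cleared, clearly high-risk exams are
    auto-flagged for recall, and the middle band keeps double reading."""
    if ai_score < low_thr:
        return "auto_normal"
    if ai_score >= high_thr:
        return "auto_recall"
    return "double_read"

def screen_read_reduction(scores, low_thr, high_thr):
    """Fraction of exams removed from the human double-reading queue;
    the study tuned thresholds so this lands near 50%."""
    routed = [triage(s, low_thr, high_thr) for s in scores]
    return sum(r != "double_read" for r in routed) / len(routed)

def accuracy_metrics(tp, fp, tn, fn):
    """The accuracy measures compared across scenarios, computed from a
    confusion-matrix summary of screening outcomes."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "recall_rate": (tp + fp) / total,
    }
```

In this framing, comparing a scenario with standard double reading amounts to recomputing the confusion-matrix summary after rerouting and checking how sensitivity, PPV, NPV, and recall rate shift.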

Citations: 0