
Radiology-Artificial Intelligence: Latest Publications

The Duke Lung Cancer Screening (DLCS) Dataset: A Reference Dataset of Annotated Low-Dose Screening Thoracic CT.
IF 13.2 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-07-01 | DOI: 10.1148/ryai.240248
Avivah J Wang, Fakrul Islam Tushar, Michael R Harowicz, Betty C Tong, Kyle J Lafata, Tina D Tailor, Joseph Y Lo
{"title":"The Duke Lung Cancer Screening (DLCS) Dataset: A Reference Dataset of Annotated Low-Dose Screening Thoracic CT.","authors":"Avivah J Wang, Fakrul Islam Tushar, Michael R Harowicz, Betty C Tong, Kyle J Lafata, Tina D Tailor, Joseph Y Lo","doi":"10.1148/ryai.240248","DOIUrl":"10.1148/ryai.240248","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240248"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12319698/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144053087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.
IF 13.2 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-07-01 | DOI: 10.1148/ryai.240790
Anna Fink, Alexander Rau, Marco Reisert, Fabian Bamberg, Maximilian F Russe

Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in sources for LLM responses. Retrieval-augmented generation (RAG)-based LLMs offer a promising approach to streamline radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical in order to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. Keywords: Artificial Intelligence, Deep Learning, Natural Language Processing, Tomography, x-Ray © RSNA, 2025.
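To make the retrieval-augmented generation idea described in this abstract concrete, the following is a minimal sketch of a RAG loop for a radiology question: embed a small document collection, retrieve the most similar snippets, and prepend them to the prompt. The `embed` and `generate` functions are placeholders for whichever embedding model and LLM are used, and the guideline snippets are invented for illustration; this is not the workflow from the article.

```python
# Minimal retrieval-augmented generation (RAG) sketch for a radiology question.
# embed() and generate() are placeholders for any embedding model and LLM API;
# the guideline snippets below are illustrative, not real documents.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector for `text` from an embedding model of your choice."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # deterministic stand-in
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Placeholder: call an LLM (local or hosted) with the assembled prompt."""
    return "<LLM answer grounded in the retrieved snippets>"

documents = [
    "Fleischner 2017: solid nodules 6-8 mm in low-risk patients -> CT follow-up at 6-12 months.",
    "Lung-RADS 2022: category 3 denotes probably benign findings with 6-month follow-up.",
    "BT-RADS: score 3 indicates indeterminate change on follow-up brain MRI.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str, k: int = 2) -> str:
    q = embed(question)
    # cosine similarity between the question and every document
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:k])
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What follow-up does Lung-RADS 3 require?"))
```

The same skeleton extends to the agentic variants mentioned above by letting the model issue further retrieval calls instead of answering in a single pass.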

Citations: 0
The BraTS-Africa Dataset: Expanding the Brain Tumor Segmentation Data to Capture African Populations.
IF 13.2 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-07-01 | DOI: 10.1148/ryai.240528
Maruf Adewole, Jeffrey D Rudie, Anu Gbadamosi, Dong Zhang, Confidence Raymond, James Ajigbotoshso, Oluyemisi Toyobo, Kenneth Aguh, Olubukola Omidiji, Rachel Akinola, Mohammad Abba Suwaid, Adaobi Emegoakor, Nancy Ojo, Chinasa Kalaiwo, Gabriel Babatunde, Afolabi Ogunleye, Yewande Gbadamosi, Kator Iorpagher, Mayomi Onuwaje, Bamidele Betiku, Jasmine Cakmak, Björn Menze, Ujjwal Baid, Spyridon Bakas, Farouk Dako, Abiodun Fatade, Udunna C Anazodo
{"title":"The BraTS-Africa Dataset: Expanding the Brain Tumor Segmentation Data to Capture African Populations.","authors":"Maruf Adewole, Jeffrey D Rudie, Anu Gbadamosi, Dong Zhang, Confidence Raymond, James Ajigbotoshso, Oluyemisi Toyobo, Kenneth Aguh, Olubukola Omidiji, Rachel Akinola, Mohammad Abba Suwaid, Adaobi Emegoakor, Nancy Ojo, Chinasa Kalaiwo, Gabriel Babatunde, Afolabi Ogunleye, Yewande Gbadamosi, Kator Iorpagher, Mayomi Onuwaje, Bamidele Betiku, Jasmine Cakmak, Björn Menze, Ujjwal Baid, Spyridon Bakas, Farouk Dako, Abiodun Fatade, Udunna C Anazodo","doi":"10.1148/ryai.240528","DOIUrl":"10.1148/ryai.240528","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240528"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12319694/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143989079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.
IF 8.1 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.230555
José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou

Purpose To assess the effect of scanner manufacturer and scanning protocol on the performance of deep learning models to classify aggressiveness of prostate cancer (PCa) at biparametric MRI (bpMRI). Materials and Methods In this retrospective study, 5478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and to test how using data from different subgroups, defined by scanner manufacturer and endorectal coil (ERC) use (Siemens, Philips, GE with and without ERC, and the full dataset), affects model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The effect of clinical features (age, prostate-specific antigen level, Prostate Imaging Reporting and Data System score) on model performance was also evaluated. Results DL models were trained on 4328 bpMRI cases, and the best model achieved an AUC of 0.73 when trained and tested using data from all manufacturers. Held-out test set performance was higher when models trained with data from a manufacturer were tested on the same manufacturer (within- and between-manufacturer AUC differences of 0.05 on average, P < .001). The addition of clinical features did not improve performance (P = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scanning protocol heavily influenced feature distributions. Conclusion In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and ERC use had a major effect on DL model performance and features. Keywords: Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD), Computer Applications-General (Informatics), Oncology Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Suri and Hsu in this issue.
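The within- versus between-manufacturer comparison reported above can be illustrated with a small sketch: evaluate every (training manufacturer, test manufacturer) pairing and compare matched against mismatched AUCs. The scores below are synthetic stand-ins for model outputs; this is not the authors' pipeline, only the shape of the analysis.

```python
# Illustrative sketch of a subgroup analysis by scanner manufacturer:
# within- vs between-manufacturer AUC on held-out cases (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
manufacturers = ["SIEMENS", "PHILIPS", "GE"]

def fake_scores(train_m, test_m, n=200):
    """Fake held-out labels/scores for a model trained on `train_m`, tested on `test_m`."""
    y = rng.integers(0, 2, n)
    shift = 0.8 if train_m == test_m else 0.5   # pretend matched pairs separate better
    return y, y * shift + rng.normal(0, 0.6, n)

aucs = {}
for train_m in manufacturers:
    for test_m in manufacturers:
        y_true, y_score = fake_scores(train_m, test_m)
        aucs[(train_m, test_m)] = roc_auc_score(y_true, y_score)

within = np.mean([aucs[(m, m)] for m in manufacturers])
between = np.mean([aucs[(a, b)] for a in manufacturers for b in manufacturers if a != b])
print(f"mean within-manufacturer AUC:  {within:.2f}")
print(f"mean between-manufacturer AUC: {between:.2f}")
print(f"difference: {within - between:.2f}")
```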

Citations: 0
Deep Learning-based Aligned Strain from Cine Cardiac MRI for Detection of Fibrotic Myocardial Tissue in Patients with Duchenne Muscular Dystrophy.
IF 8.1 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.240303
Sven Koehler, Julian Kuhm, Tyler Huffaker, Daniel Young, Animesh Tandon, Florian André, Norbert Frey, Gerald Greil, Tarique Hussain, Sandy Engelhardt

Purpose To develop a deep learning (DL) model that derives aligned strain values from cine (noncontrast) cardiac MRI and evaluate performance of these values to predict myocardial fibrosis in patients with Duchenne muscular dystrophy (DMD). Materials and Methods This retrospective study included 139 male patients with DMD who underwent cardiac MRI at a single center between February 2018 and April 2023. A DL pipeline was developed to detect five key frames throughout the cardiac cycle and respective dense deformation fields, allowing for phase-specific strain analysis across patients and from one key frame to the next. Effectiveness of these strain values in identifying abnormal deformations associated with fibrotic segments was evaluated in 57 patients (mean age [± SD], 15.2 years ± 3.1), and reproducibility was assessed in 82 patients by comparing the study method with existing feature-tracking and DL-based methods. Statistical analysis compared strain values using t tests, mixed models, and more than 2000 machine learning models; accuracy, F1 score, sensitivity, and specificity are reported. Results DL-based aligned strain identified five times more differences (29 vs five; P < .01) between fibrotic and nonfibrotic segments compared with traditional strain values and identified abnormal diastolic deformation patterns often missed with traditional methods. In addition, aligned strain values enhanced performance of predictive models for myocardial fibrosis detection, improving specificity by 40%, overall accuracy by 17%, and accuracy in patients with preserved ejection fraction by 61%. Conclusion The proposed aligned strain technique enables motion-based detection of myocardial dysfunction at noncontrast cardiac MRI, facilitating detailed interpatient strain analysis and allowing precise tracking of disease progression in DMD. Keywords: Pediatrics, Image Postprocessing, Heart, Cardiac, Convolutional Neural Network (CNN) Duchenne Muscular Dystrophy Supplemental material is available for this article. © RSNA, 2025.
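As background for how strain follows from the dense deformation fields mentioned in this abstract, the sketch below computes a Green-Lagrange strain tensor from a 2D displacement field and reduces it to a per-pixel principal strain. The random displacement field is a stand-in for a registration output; this is a generic strain calculation, not the authors' implementation.

```python
# Schematic computation of strain from a dense 2D displacement field, as one might
# obtain from deep learning-based registration between cine key frames.
import numpy as np

H, W = 64, 64
u = np.random.default_rng(0).normal(0, 0.5, (H, W, 2))  # displacement (dy, dx) in pixels

# Spatial gradients of each displacement component
duy_dy, duy_dx = np.gradient(u[..., 0])
dux_dy, dux_dx = np.gradient(u[..., 1])

# Deformation gradient F = I + grad(u), then Green-Lagrange strain E = 0.5 (F^T F - I)
F = np.zeros((H, W, 2, 2))
F[..., 0, 0] = 1 + duy_dy
F[..., 0, 1] = duy_dx
F[..., 1, 0] = dux_dy
F[..., 1, 1] = 1 + dux_dx
E = 0.5 * (np.einsum("...ki,...kj->...ij", F, F) - np.eye(2))

# Scalar summary per pixel (first principal strain) for segment-wise statistics
principal = np.linalg.eigvalsh(E)[..., -1]
print("mean principal strain:", principal.mean())
```

Repeating this between each pair of the five key frames, with voxels kept in correspondence, yields the "aligned" phase-specific strain values that are then compared across patients.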

Citations: 0
Natural Language Processing for Everyone.
IF 13.2 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.250218
Quirin D Strotzer
{"title":"Natural Language Processing for Everyone.","authors":"Quirin D Strotzer","doi":"10.1148/ryai.250218","DOIUrl":"10.1148/ryai.250218","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250218"},"PeriodicalIF":13.2,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144053089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial Intelligence Is Brittle: We Need to Do Better.
IF 13.2 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.250081
Abhinav Suri, William Hsu
{"title":"Artificial Intelligence Is Brittle: We Need to Do Better.","authors":"Abhinav Suri, William Hsu","doi":"10.1148/ryai.250081","DOIUrl":"10.1148/ryai.250081","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250081"},"PeriodicalIF":13.2,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127952/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143812643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography.
IF 8.1 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.240140
Zhao Shi, Bin Hu, Mengjie Lu, Manting Zhang, Haiting Yang, Bo He, Jiyao Ma, Chunfeng Hu, Li Lu, Sheng Li, Shiyu Ren, Yonggao Zhang, Jun Li, Mayidili Nijiati, Jiake Dong, Hao Wang, Zhen Zhou, Fandong Zhang, Chengwei Pan, Yizhou Yu, Zijian Chen, Chang Sheng Zhou, Yongyue Wei, Junlin Zhou, Long Jiang Zhang

Purpose To evaluate a sham-artificial intelligence (AI) model acting as a placebo control for a standard-AI model for diagnosis of intracranial aneurysm. Materials and Methods This retrospective crossover, blinded, multireader, multicase study was conducted from November 2022 to March 2023. A sham-AI model with near-zero sensitivity and similar specificity to a standard AI model was developed using 16 422 CT angiography examinations. Digital subtraction angiography-verified CT angiographic examinations from four hospitals were collected, half of which were processed by standard AI and the others by sham AI to generate sequence A; sequence B was generated in the reverse order. Twenty-eight radiologists from seven hospitals were randomly assigned to either sequence and then assigned to the other sequence after a washout period. The diagnostic performances of radiologists alone, radiologists with standard-AI assistance, and radiologists with sham-AI assistance were compared using sensitivity and specificity, and radiologists' susceptibility to sham AI suggestions was assessed. Results The testing dataset included 300 patients (median age, 61.0 years [IQR, 52.0-67.0]; 199 male), 50 of whom had aneurysms. Standard AI and sham AI performed as expected (sensitivity, 96.0% vs 0.0%; specificity, 82.0% vs 76.0%). The differences in sensitivity and specificity between standard AI-assisted and sham AI-assisted readings were 20.7% (95% CI: 15.8, 25.5 [superiority]) and 0.0% (95% CI: -2.0, 2.0 [noninferiority]), respectively. The difference between sham AI-assisted readings and radiologists alone was -2.6% (95% CI: -3.8, -1.4 [noninferiority]) for both sensitivity and specificity. After sham-AI suggestions, 5.3% (44 of 823) of true-positive and 1.2% (seven of 577) of false-negative results of radiologists alone were changed. Conclusion Radiologists' diagnostic performance was not compromised when aided by the proposed sham-AI model compared with their unassisted performance. Keywords: CT Angiography, Vascular, Intracranial Aneurysm, Sham AI Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Mayfield and Romero in this issue.
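The reader-study endpoints above are per-reader sensitivity and specificity and their difference between assistance conditions. The toy sketch below shows those calculations with a simple unpaired Wald interval; the actual study used a paired crossover design, and all labels and read decisions here are synthetic placeholders.

```python
# Toy illustration of reader-study metrics: sensitivity/specificity per condition
# and a simple (unpaired Wald) confidence interval for their difference.
import numpy as np

def sens_spec(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = (y_pred[y_true == 1] == 1).mean()
    spec = (y_pred[y_true == 0] == 0).mean()
    return sens, spec

def diff_ci(p1, n1, p2, n2, z=1.96):
    d = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

rng = np.random.default_rng(1)
y = np.r_[np.ones(50, int), np.zeros(250, int)]          # 50 aneurysms, 250 controls
standard = np.where(y == 1, rng.random(300) < 0.95, rng.random(300) < 0.15).astype(int)
sham     = np.where(y == 1, rng.random(300) < 0.75, rng.random(300) < 0.17).astype(int)

s1, _ = sens_spec(y, standard)
s2, _ = sens_spec(y, sham)
d, ci = diff_ci(s1, 50, s2, 50)
print(f"sensitivity difference: {d:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```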

Citations: 0
Open-Weight Language Models and Retrieval-Augmented Generation for Automated Structured Data Extraction from Diagnostic Reports: Assessment of Approaches and Parameters.
IF 8.1 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.240551
Mohamed Sobhi Jabal, Pranav Warman, Jikai Zhang, Kartikeye Gupta, Ayush Jain, Maciej Mazurowski, Walter Wiggins, Kirti Magudia, Evan Calabrese

Purpose To develop and evaluate an automated system for extracting structured clinical information from unstructured radiology and pathology reports using open-weight language models (LMs) and retrieval-augmented generation (RAG) and to assess the effects of model configuration variables on extraction performance. Materials and Methods This retrospective study used two datasets: 7294 radiology reports annotated for Brain Tumor Reporting and Data System (BT-RADS) scores and 2154 pathology reports annotated for IDH mutation status (January 2017-July 2021). An automated pipeline was developed to benchmark the performance of various LMs and RAG configurations for accuracy of structured data extraction from reports. The effect of model size, quantization, prompting strategies, output formatting, and inference parameters on model accuracy was systematically evaluated. Results The best-performing models achieved up to 98% accuracy in extracting BT-RADS scores from radiology reports and greater than 90% accuracy for extraction of IDH mutation status from pathology reports. The best model was medical fine-tuned Llama 3. Larger, newer, and domain fine-tuned models consistently outperformed older and smaller models (mean accuracy, 86% vs 75%; P < .001). Model quantization had minimal effect on performance. Few-shot prompting significantly improved accuracy (mean [±SD] increase, 32% ± 32; P = .02). RAG improved performance for complex pathology reports by a mean of 48% ± 11 (P = .001) but not for shorter radiology reports (-8% ± 31; P = .39). Conclusion This study demonstrates the potential of open LMs in automated extraction of structured clinical data from unstructured clinical reports with local privacy-preserving application. Careful model selection, prompt engineering, and semiautomated optimization using annotated data are critical for optimal performance. Keywords: Large Language Models, Retrieval-Augmented Generation, Radiology, Pathology, Health Care Reports Supplemental material is available for this article. © RSNA, 2025 See also commentary by Tejani and Rauschecker in this issue.
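One way to picture the few-shot, structured-output extraction benchmarked in this study is the sketch below: a prompt with two worked examples, a JSON-only instruction, and accuracy scoring against annotations. `call_llm` is a placeholder for any local open-weight model, and the report text and few-shot pairs are invented for illustration; this is not the article's pipeline.

```python
# Sketch of few-shot, JSON-constrained extraction of a BT-RADS score from a report,
# scored for accuracy against annotated labels.
import json
import re

FEW_SHOT = """Report: "Stable postsurgical changes, no new enhancement." -> {"bt_rads": "2"}
Report: "Increased enhancement concerning for progression." -> {"bt_rads": "4"}"""

def call_llm(prompt: str) -> str:
    """Placeholder: run an open-weight LLM (e.g., via a local inference server)."""
    return '{"bt_rads": "3"}'

def extract_bt_rads(report_text: str):
    prompt = (
        "Extract the BT-RADS score from the report. Respond with JSON only, "
        'like {"bt_rads": "<score>"}.\n' + FEW_SHOT + f'\nReport: "{report_text}" ->'
    )
    raw = call_llm(prompt)
    match = re.search(r"\{.*\}", raw, re.DOTALL)   # tolerate extra text around the JSON
    if not match:
        return None
    try:
        return json.loads(match.group(0)).get("bt_rads")
    except json.JSONDecodeError:
        return None

annotated = [("Mildly increased FLAIR signal, likely treatment effect.", "3")]
correct = sum(extract_bt_rads(text) == label for text, label in annotated)
print(f"accuracy: {correct / len(annotated):.2%}")
```

For longer pathology reports, the RAG step described in the abstract would retrieve only the relevant report passages before building this prompt.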

Citations: 0
Unsupervised Deep Learning for Blood-Brain Barrier Leakage Detection in Diffuse Glioma Using Dynamic Contrast-enhanced MRI.
IF 8.1 | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-05-01 | DOI: 10.1148/ryai.240507
Joon Jang, Kyu Sung Choi, Junhyeok Lee, Hyochul Lee, Inpyeong Hwang, Jung Hyun Park, Jin Wook Chung, Seung Hong Choi, Hyeonjin Kim

Purpose To develop an unsupervised deep learning framework for generalizable blood-brain barrier leakage detection using dynamic contrast-enhanced MRI, without requiring pharmacokinetic models and arterial input function estimation. Materials and Methods This retrospective study included data from patients who underwent dynamic contrast-enhanced MRI between April 2010 and December 2020. An autoencoder-based anomaly detection approach identified one-dimensional voxel-wise time-series abnormal signals through reconstruction residuals, separating them into residual leakage signals (RLSs) and residual vascular signals. The RLS maps were evaluated and compared with the volume transfer constant (Ktrans) using the structural similarity index and correlation coefficient. Generalizability was tested on subsampled data, and isocitrate dehydrogenase (IDH) status classification performance was assessed using area under the receiver operating characteristic curve (AUC). Results A total of 274 patients (mean age, 54.4 years ± 14.6 [SD]; 164 male) were included in the study. RLS showed high structural similarity (structural similarity index, 0.91 ± 0.02) and correlation (r = 0.56; P < .001) with Ktrans. On subsampled data, RLS maps showed better correlation with RLS values from the original data (0.89 vs 0.72; P < .001), higher peak signal-to-noise ratio (33.09 dB vs 28.94 dB; P < .001), and higher structural similarity index (0.92 vs 0.87; P < .001) compared with Ktrans maps. RLS maps also outperformed Ktrans maps in predicting IDH mutation status (AUC, 0.87 [95% CI: 0.83, 0.91] vs 0.81 [95% CI: 0.76, 0.85]; P = .02). Conclusion The unsupervised framework effectively detected blood-brain barrier leakage without pharmacokinetic models and arterial input function. Keywords: Dynamic Contrast-enhanced MRI, Unsupervised Learning, Feature Detection, Blood-Brain Barrier Leakage Detection Supplemental material is available for this article. © RSNA, 2025 See also commentary by Júdice de Mattos Farina and Kuriki in this issue.
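The core mechanism described here, an autoencoder trained on voxel-wise signal-time curves whose reconstruction residuals flag abnormal signals, can be sketched in a few lines. The curves below are synthetic, the architecture is arbitrary, and the leakage/vascular separation is only indicated in a comment; this is not the authors' model.

```python
# Minimal sketch of voxel-wise time-series anomaly detection with an autoencoder:
# train on (mostly normal) DCE-MRI signal curves, then flag voxels whose
# reconstruction residuals are large.
import torch
from torch import nn

T = 60                                   # time points per voxel curve
x = torch.randn(5000, T) * 0.1           # "normal" enhancement curves (synthetic)

autoencoder = nn.Sequential(
    nn.Linear(T, 32), nn.ReLU(), nn.Linear(32, 8),   # encoder
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, T),   # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                     # quick full-batch training on normal curves
    opt.zero_grad()
    loss = loss_fn(autoencoder(x), x)
    loss.backward()
    opt.step()

# At inference, residuals (input minus reconstruction) highlight abnormal signals;
# separating them into leakage vs vascular components would follow downstream.
test = torch.cat([x[:10], x[:10] + torch.linspace(0, 1, T)])   # add a leakage-like ramp
residual = (test - autoencoder(test)).abs().mean(dim=1)
print(residual[:10].mean().item(), residual[10:].mean().item())
```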

Citations: 0