
Latest Articles in Radiology-Artificial Intelligence

Open-Weight Language Models and Retrieval-Augmented Generation for Automated Structured Data Extraction from Diagnostic Reports: Assessment of Approaches and Parameters.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.240551
Mohamed Sobhi Jabal, Pranav Warman, Jikai Zhang, Kartikeye Gupta, Ayush Jain, Maciej Mazurowski, Walter Wiggins, Kirti Magudia, Evan Calabrese

Purpose To develop and evaluate an automated system for extracting structured clinical information from unstructured radiology and pathology reports using open-weight language models (LMs) and retrieval-augmented generation (RAG) and to assess the effects of model configuration variables on extraction performance. Materials and Methods This retrospective study used two datasets: 7294 radiology reports annotated for Brain Tumor Reporting and Data System (BT-RADS) scores and 2154 pathology reports annotated for IDH mutation status (January 2017-July 2021). An automated pipeline was developed to benchmark the performance of various LMs and RAG configurations for accuracy of structured data extraction from reports. The effect of model size, quantization, prompting strategies, output formatting, and inference parameters on model accuracy was systematically evaluated. Results The best-performing models achieved up to 98% accuracy in extracting BT-RADS scores from radiology reports and greater than 90% accuracy for extraction of IDH mutation status from pathology reports. The best model was medical fine-tuned Llama 3. Larger, newer, and domain fine-tuned models consistently outperformed older and smaller models (mean accuracy, 86% vs 75%; P < .001). Model quantization had minimal effect on performance. Few-shot prompting significantly improved accuracy (mean [±SD] increase, 32% ± 32; P = .02). RAG improved performance for complex pathology reports by a mean of 48% ± 11 (P = .001) but not for shorter radiology reports (-8% ± 31; P = .39). Conclusion This study demonstrates the potential of open LMs in automated extraction of structured clinical data from unstructured clinical reports with local privacy-preserving application. Careful model selection, prompt engineering, and semiautomated optimization using annotated data are critical for optimal performance. 
Keywords: Large Language Models, Retrieval-Augmented Generation, Radiology, Pathology, Health Care Reports Supplemental material is available for this article. © RSNA, 2025 See also commentary by Tejani and Rauschecker in this issue.
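The benchmarking this abstract describes reduces each model/prompt configuration to a single score: the fraction of reports where the extracted value matches the annotation. A minimal runnable sketch of that comparison, with hypothetical function names and invented BT-RADS labels (not the authors' pipeline or data):

```python
# Score an extraction configuration by exact-match accuracy against
# annotated ground truth. All labels below are invented for illustration.
def extraction_accuracy(predicted: list[str], annotated: list[str]) -> float:
    """Fraction of reports where the extracted value matches the annotation."""
    if len(predicted) != len(annotated):
        raise ValueError("prediction/annotation length mismatch")
    correct = sum(p == a for p, a in zip(predicted, annotated))
    return correct / len(annotated)

# Hypothetical BT-RADS scores extracted by two prompt configurations.
annotations = ["1a", "2", "3b", "4", "2"]
zero_shot   = ["1a", "2", "3a", "4", "1a"]
few_shot    = ["1a", "2", "3b", "4", "2"]

print(extraction_accuracy(zero_shot, annotations))  # 0.6
print(extraction_accuracy(few_shot, annotations))   # 1.0
```

Each variable the study sweeps (model size, quantization, few-shot prompting, RAG, output format) would produce its own `predicted` list to score this way.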

Citations: 0
Unsupervised Deep Learning for Blood-Brain Barrier Leakage Detection in Diffuse Glioma Using Dynamic Contrast-enhanced MRI.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.240507
Joon Jang, Kyu Sung Choi, Junhyeok Lee, Hyochul Lee, Inpyeong Hwang, Jung Hyun Park, Jin Wook Chung, Seung Hong Choi, Hyeonjin Kim

Purpose To develop an unsupervised deep learning framework for generalizable blood-brain barrier leakage detection using dynamic contrast-enhanced MRI, without requiring pharmacokinetic models and arterial input function estimation. Materials and Methods This retrospective study included data from patients who underwent dynamic contrast-enhanced MRI between April 2010 and December 2020. An autoencoder-based anomaly detection approach identified one-dimensional voxel-wise time-series abnormal signals through reconstruction residuals, separating them into residual leakage signals (RLSs) and residual vascular signals. The RLS maps were evaluated and compared with the volume transfer constant (Ktrans) using the structural similarity index and correlation coefficient. Generalizability was tested on subsampled data, and isocitrate dehydrogenase (IDH) status classification performance was assessed using area under the receiver operating characteristic curve (AUC). Results A total of 274 patients (mean age, 54.4 years ± 14.6 [SD]; 164 male) were included in the study. RLS showed high structural similarity (structural similarity index, 0.91 ± 0.02) and correlation (r = 0.56; P < .001) with Ktrans. On subsampled data, RLS maps showed better correlation with RLS values from the original data (0.89 vs 0.72; P < .001), higher peak signal-to-noise ratio (33.09 dB vs 28.94 dB; P < .001), and higher structural similarity index (0.92 vs 0.87; P < .001) compared with Ktrans maps. RLS maps also outperformed Ktrans maps in predicting IDH mutation status (AUC, 0.87 [95% CI: 0.83, 0.91] vs 0.81 [95% CI: 0.76, 0.85]; P = .02). Conclusion The unsupervised framework effectively detected blood-brain barrier leakage without pharmacokinetic models and arterial input function. Keywords: Dynamic Contrast-enhanced MRI, Unsupervised Learning, Feature Detection, Blood-Brain Barrier Leakage Detection Supplemental material is available for this article. 
© RSNA, 2025 See also commentary by Júdice de Mattos Farina and Kuriki in this issue.
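The reconstruction-residual idea behind this framework can be illustrated with a toy stand-in: a model trained on normal time-series signals reconstructs them closely, so curves with large residuals are flagged as abnormal. In this sketch the "autoencoder" is replaced by a trivial mean-value reconstructor so the example runs; the actual method learns the reconstruction:

```python
# Toy anomaly detection by reconstruction residual. A flat signal is
# reconstructed well by its own mean; a steadily accumulating
# (leakage-like) signal leaves a large residual and is flagged.
def mean_reconstruction(signal):
    """Stand-in for a learned autoencoder: reconstruct as the mean value."""
    m = sum(signal) / len(signal)
    return [m] * len(signal)

def residual(signal, reconstruction):
    return [s - r for s, r in zip(signal, reconstruction)]

def is_abnormal(signal, threshold=0.5):
    """Flag a curve whose worst reconstruction error exceeds the threshold."""
    res = residual(signal, mean_reconstruction(signal))
    return max(abs(r) for r in res) > threshold

flat_curve  = [1.0, 1.02, 0.98, 1.01]  # normal-looking signal
leaky_curve = [1.0, 1.4, 1.9, 2.6]     # steadily accumulating signal
print(is_abnormal(flat_curve))   # False
print(is_abnormal(leaky_curve))  # True
```

The study separates such abnormal residuals further into leakage and vascular components; that split is beyond this toy example.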

Citations: 0
Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.240039
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind

Purpose To evaluate cancer detection and marker placement accuracy of two artificial intelligence (AI) models developed for interpretation of screening mammograms. Materials and Methods This retrospective study included data from 129 434 screening examinations (all female patients; mean age, 59.2 years ± 5.8 [SD]) performed between January 2008 and December 2018 in BreastScreen Norway. Model A was commercially available and model B was an in-house model. Area under the receiver operating characteristic curve (AUC) with 95% CIs were calculated. The study defined 3.2% and 11.1% of the examinations with the highest AI scores as positive, threshold 1 and 2, respectively. A radiologic review assessed location of AI markings and classified interval cancers as true or false negative. Results The AUC value was 0.93 (95% CI: 0.92, 0.94) for model A and B when including screen-detected and interval cancers. Model A identified 82.5% (611 of 741) of the screen-detected cancers at threshold 1 and 92.4% (685 of 741) at threshold 2. Model B identified 81.8% (606 of 741) at threshold 1 and 93.7% (694 of 741) at threshold 2. The AI markings were correctly localized for all screen-detected cancers identified by both models and 82% (56 of 68) of the interval cancers for model A and 79% (54 of 68) for model B. At the review, 21.6% (45 of 208) of the interval cancers were identified at the preceding screening by either or both models, correctly localized and classified as false negative (n = 17) or with minimal signs of malignancy (n = 28). Conclusion Both AI models showed promising performance for cancer detection on screening mammograms. The AI markings corresponded well to the true cancer locations. Keywords: Breast, Mammography, Screening, Computed-aided Diagnosis Supplemental material is available for this article. © RSNA, 2025 See also commentary by Cadrin-Chênevert in this issue.
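The operating points in this study are defined by rank rather than by a fixed score: the examinations with the top 3.2% or 11.1% of AI scores are called positive. A hedged sketch of that thresholding and the sensitivity it yields, on invented scores and labels:

```python
# Rank-based thresholding: flag the top fraction of AI scores as
# positive, then measure what share of known cancers fall above the
# cutoff. Scores and labels are invented for illustration.
def score_threshold(scores, positive_fraction):
    """Cutoff such that roughly `positive_fraction` of scores lie at or above it."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(len(ranked) * positive_fraction))
    return ranked[k - 1]

def sensitivity_at(scores, is_cancer, positive_fraction):
    cut = score_threshold(scores, positive_fraction)
    flagged = [s >= cut for s in scores]
    caught = sum(f and c for f, c in zip(flagged, is_cancer))
    return caught / sum(is_cancer)

scores    = [0.95, 0.90, 0.80, 0.40, 0.30, 0.20, 0.15, 0.10, 0.05, 0.02]
is_cancer = [True, True, False, True, False, False, False, False, False, False]
print(sensitivity_at(scores, is_cancer, 0.30))  # ≈0.67 (2 of 3 cancers above the cutoff)
```

In the study itself, this is the mechanism by which model A reaches 82.5% sensitivity at the 3.2% threshold and 92.4% at the 11.1% threshold.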

Citations: 0
One System to Rule Them All? Task- and Data-specific Considerations for Automated Data Extraction.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.250175
Ali S Tejani, Andreas M Rauschecker
Citations: 0
Seeing the Unseen: How Unsupervised Learning Can Predict Genetic Mutations from Radiologic Images.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.250243
Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki
Citations: 0
Adaptive Dual-Task Deep Learning for Automated Thyroid Cancer Triaging at Screening US.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.240271
Shao-Hong Wu, Ming-De Li, Wen-Juan Tong, Yi-Hao Liu, Rui Cui, Jin-Bo Hu, Mei-Qing Cheng, Wei-Ping Ke, Xin-Xin Lin, Jia-Yi Lv, Long-Zhong Liu, Jie Ren, Guang-Jian Liu, Hong Yang, Wei Wang

Purpose To develop an adaptive dual-task deep learning model (ThyNet-S) for detection and classification of thyroid lesions at US screening. Materials and Methods This retrospective study used a multicenter dataset comprising 35 008 thyroid US images of 23 294 individual examinations (mean age, 40.4 years ± 13.1 [SD]; 17 587 female) from seven medical centers from January 2009 to December 2021. Of these, 29 004 images were used for model development and 6004 images for validation. The model determined cancer risk for each image and automatically triaged images with normal thyroid and benign nodules by dynamically integrating lesion detection through pixel-level feature analysis and lesion classification through deep semantic features analysis. Diagnostic performance of screening assisted by the model (ThyNet-S triaged screening) and traditional screening (radiologists alone) was assessed by comparing sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve using the McNemar test and DeLong test. The influence of ThyNet-S on radiologist workload and clinical decision-making was also assessed. Results ThyNet-S-assisted triaged screening achieved a higher area under the receiver operating characteristic curve than original screening with six senior and six junior radiologists (0.93 vs 0.91 and 0.92 vs 0.88, respectively; all P < .001). The model improved sensitivity for junior radiologists (88.2% vs 86.8%; P < .001). Notably, the model reduced radiologists' workload by triaging 60.4% of cases as not potentially malignant, which did not require further interpretation. The model simultaneously decreased the unnecessary fine needle aspiration rate from 38.7% to 14.9% and 11.5% when used independently or in combination with the Thyroid Imaging Reporting and Data System, respectively. Conclusion ThyNet-S improved the efficiency of thyroid cancer screening and optimized clinical decision-making. 
Keywords: Artificial Intelligence, Adaptive, Dual Task, Thyroid Cancer, Screening, Ultrasound Supplemental material is available for this article. © RSNA, 2025.
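The triage flow described here, a detection task that clears images with no suspicious lesion and a classification task that risk-scores the rest, can be sketched with trivial stand-in models (toy data and lambdas, not ThyNet-S):

```python
# Dual-task triage sketch: stage 1 (detection) filters out images with
# no lesion; only the remainder pass to stage 2 (classification) and
# human review. Both "models" are trivial stand-ins keyed on a toy dict.
def triage(images, detect, classify):
    """Split images into auto-cleared vs needs-review with a risk score."""
    cleared, review = [], []
    for img in images:
        if not detect(img):                      # task 1: lesion detection
            cleared.append(img)
        else:
            review.append((img, classify(img)))  # task 2: cancer risk
    return cleared, review

toy = {"a": None, "b": 0.2, "c": 0.9}  # None = no lesion found
cleared, review = triage(
    list(toy),
    detect=lambda i: toy[i] is not None,
    classify=lambda i: toy[i],
)
print(cleared)  # ['a'] — removed from the reading workload
print(review)   # [('b', 0.2), ('c', 0.9)]
```

The study's reported 60.4% workload reduction corresponds to the `cleared` branch: cases the model triages out of further interpretation.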

Citations: 0
Pixels to Prognosis: Using Deep Learning to Rethink Cardiac Risk Prediction from CT Angiography.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-01 DOI: 10.1148/ryai.250260
Rohit Reddy
Citations: 0
Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Using a Longitudinally Aware Segmentation Network.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-05-01 DOI: 10.1148/ryai.240229
Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw

Purpose To develop a longitudinally aware segmentation network (LAS-Net) that can quantify serial PET/CT images for pediatric patients with Hodgkin lymphoma. Materials and Methods This retrospective study included baseline (PET1) and interim (PET2) PET/CT images from 297 pediatric patients enrolled in two Children's Oncology Group clinical trials (AHOD1331 and AHOD0831). The internal dataset included 200 patients (enrolled between March 2015 and August 2019; median age, 15.4 years [range, 5.6-22.0 years]; 107 male), and the external testing dataset included 97 patients (enrolled between December 2009 and January 2012; median age, 15.8 years [range, 5.2-21.4 years]; 59 male). LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2. The model's lesion segmentation performance on PET1 images was evaluated using Dice coefficients, and lesion detection performance on PET2 images was evaluated using F1 scores. In addition, quantitative PET metrics, including metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in PET1, as well as qPET and percentage difference between baseline and interim maximum standardized uptake value (∆SUVmax) in PET2, were extracted and compared against physician-derived measurements. Agreement between model and physician-derived measurements was quantified using Spearman correlation, and bootstrap resampling was used for statistical analysis. Results LAS-Net detected residual lymphoma on PET2 scans with an F1 score of 0.61 (precision/recall: 0.62/0.60), outperforming all comparator methods (P < .01). For baseline segmentation, LAS-Net achieved a mean Dice score of 0.77. In PET quantification, LAS-Net's measurements of qPET, ∆SUVmax, MTV, and TLG were strongly correlated with physician measurements, with Spearman ρ values of 0.78, 0.80, 0.93, and 0.96, respectively. The quantification performance remained high, with a slight decrease, in an external testing cohort. 
Conclusion LAS-Net demonstrated significant improvements in quantifying PET metrics across serial scans in pediatric patients with Hodgkin lymphoma, highlighting the value of longitudinal awareness in evaluating multi-time-point imaging datasets. Keywords: Pediatrics, PET/CT, Lymphoma, Segmentation, Quantification, Supervised Learning, Convolutional Neural Network (CNN), Quantitative PET, Longitudinal Analysis, Deep Learning, Image Segmentation Supplemental material is available for this article. Clinical trial registration no. NCT02166463 and NCT01026220 © RSNA, 2025 See also commentary by Khosravi and Gichoya in this issue.
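The two headline metrics in this abstract, Dice for segmentation overlap (PET1) and F1 for lesion detection (PET2), are quick to compute. In this toy sketch the F1 counts are chosen to be consistent with the reported precision/recall of about 0.62/0.60; the voxel sets are invented:

```python
# Dice overlap between predicted and true voxel sets, and F1 from
# detection counts. Toy data, not the study's measurements.
def dice(pred: set, truth: set) -> float:
    """2·|A∩B| / (|A| + |B|): 1.0 for identical sets, 0.0 for disjoint."""
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(dice({1, 2, 3, 4}, {2, 3, 4, 5}))   # 0.75
print(round(f1(tp=60, fp=37, fn=40), 2))  # 0.61
```

With tp=60, fp=37, fn=40, precision is about 0.62 and recall 0.60, giving F1 ≈ 0.61, matching the detection performance the abstract reports.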

{"title":"Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Using a Longitudinally Aware Segmentation Network.","authors":"Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw","doi":"10.1148/ryai.240229","DOIUrl":"10.1148/ryai.240229","url":null,"abstract":"<p><p>Purpose To develop a longitudinally aware segmentation network (LAS-Net) that can quantify serial PET/CT images for pediatric patients with Hodgkin lymphoma. Materials and Methods This retrospective study included baseline (PET1) and interim (PET2) PET/CT images from 297 pediatric patients enrolled in two Children's Oncology Group clinical trials (AHOD1331 and AHOD0831). The internal dataset included 200 patients (enrolled between March 2015 and August 2019; median age, 15.4 years [range, 5.6-22.0 years]; 107 male), and the external testing dataset included 97 patients (enrolled between December 2009 and January 2012; median age, 15.8 years [range, 5.2-21.4 years]; 59 male). LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2. The model's lesion segmentation performance on PET1 images was evaluated using Dice coefficients, and lesion detection performance on PET2 images was evaluated using F1 scores. In addition, quantitative PET metrics, including metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in PET1, as well as qPET and percentage difference between baseline and interim maximum standardized uptake value (∆SUV<sub>max</sub>) in PET2, were extracted and compared against physician-derived measurements. Agreement between model and physician-derived measurements was quantified using Spearman correlation, and bootstrap resampling was used for statistical analysis. 
Results LAS-Net detected residual lymphoma on PET2 scans with an F1 score of 0.61 (precision/recall: 0.62/0.60), outperforming all comparator methods (<i>P</i> < .01). For baseline segmentation, LAS-Net achieved a mean Dice score of 0.77. In PET quantification, LAS-Net's measurements of qPET, ∆SUV<sub>max</sub>, MTV, and TLG were strongly correlated with physician measurements, with Spearman ρ values of 0.78, 0.80, 0.93, and 0.96, respectively. The quantification performance remained high, with a slight decrease, in an external testing cohort. Conclusion LAS-Net demonstrated significant improvements in quantifying PET metrics across serial scans in pediatric patients with Hodgkin lymphoma, highlighting the value of longitudinal awareness in evaluating multi-time-point imaging datasets. <b>Keywords:</b> Pediatrics, PET/CT, Lymphoma, Segmentation, Quantification, Supervised Learning, Convolutional Neural Network (CNN), Quantitative PET, Longitudinal Analysis, Deep Learning, Image Segmentation <i>Supplemental material is available for this article.</i> Clinical trial registration no. NCT02166463 and NCT01026220 © RSNA, 2025 See also commentary by Khosravi and Gichoya in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240229"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Pipeline for Automated Quality Control of Chest Radiographs.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-05-01 DOI: 10.1148/ryai.240003
Ian A Selby, Eduardo González Solares, Anna Breger, Michael Roberts, Lorena Escudero Sánchez, Judith Babar, James H F Rudd, Nicholas A Walton, Evis Sala, Carola-Bibiane Schönlieb, Jonathan R Weir-McCall
Enhancing Large Language Models with Retrieval-Augmented Generation: A Radiology-Specific Approach.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-05-01 DOI: 10.1148/ryai.240313
Dane A Weinert, Andreas M Rauschecker

Retrieval-augmented generation (RAG) is a strategy to improve the performance of large language models (LLMs) by providing an LLM with an updated corpus of knowledge that can be used for answer generation in real time. RAG may improve LLM performance and clinical applicability in radiology by providing citable, up-to-date information without requiring model fine-tuning. In this retrospective study, a radiology-specific RAG system was developed using a vector database of 3689 RadioGraphics articles published from January 1999 to December 2023. The performance of five LLMs with RAG (RAG systems) and without RAG on a 192-question radiology examination was compared. RAG significantly improved examination scores for GPT-4 (OpenAI; 81.2% vs 75.5%, P = .04) and Command R+ (Cohere; 70.3% vs 62.0%, P = .02), but not for Claude Opus (Anthropic), Mixtral (Mistral AI), or Gemini 1.5 Pro (Google DeepMind). RAG systems performed significantly better than pure LLMs on a 24-question subset directly sourced from RadioGraphics (85% vs 76%, P = .03). The RAG systems retrieved 21 of 24 (87.5%, P < .001) relevant RadioGraphics references cited in the examination's answer explanations and successfully cited them in 18 of 21 (85.7%, P < .001) outputs. The results suggest that RAG is a promising approach to enhance LLM capabilities for radiology knowledge tasks, providing transparent, domain-specific information retrieval. Keywords: Computer Applications-General (Informatics), Technology Assessment Supplemental material is available for this article. © RSNA, 2025 See also commentary by Mansuri and Gichoya in this issue.
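The retrieval step such a system relies on can be sketched generically: embed the corpus passages, embed the query, rank passages by cosine similarity, and prepend the top-k hits to the prompt. The snippet below is an illustrative toy, assuming a bag-of-words embedding and an in-memory list in place of the study's vector database of RadioGraphics articles and its production embedding models:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; real RAG systems use learned dense embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank corpus passages by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

corpus = [
    "MRI protocols for glioma follow-up imaging",
    "Pediatric chest radiograph quality control",
    "Retrieval augmented generation for radiology question answering",
]
context = retrieve("radiology question answering with retrieval", corpus, k=1)
# Retrieved passages are prepended so the LLM can ground (and cite) its answer
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: ..."
print(context[0])  # → "Retrieval augmented generation for radiology question answering"
```

In a deployed system, embed would call a dense embedding model and the corpus would sit in a vector database behind an approximate nearest-neighbor index; the rank-and-prepend logic stays the same, which is what makes the retrieved context citable in the model's output.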
