
Latest Publications in Radiology: Artificial Intelligence

AI-assisted Analysis to Facilitate Detection of Humeral Lesions on Chest Radiographs.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-05-01 DOI: 10.1148/ryai.230094
Harim Kim, Kyungsu Kim, Seong Je Oh, Sungjoo Lee, Jung Han Woo, Jong Hee Kim, Yoon Ki Cha, Kyunga Kim, Myung Jin Chung

Purpose To develop an artificial intelligence (AI) system for humeral tumor detection on chest radiographs (CRs) and evaluate the impact on reader performance. Materials and Methods In this retrospective study, 14 709 CRs (January 2000 to December 2021) were collected from 13 468 patients, including CT-proven normal (n = 13 116) and humeral tumor (n = 1593) cases. The data were divided into training and test groups. A novel training method called false-positive activation area reduction (FPAR) was introduced to enhance the diagnostic performance by focusing on the humeral region. The AI program and 10 radiologists were assessed using holdout test set 1, wherein the radiologists were tested twice (with and without AI test results). The performance of the AI system was evaluated using holdout test set 2, comprising 10 497 normal images. Receiver operating characteristic analyses were conducted for evaluating model performance. Results FPAR application in the AI program improved its performance compared with a conventional model based on the area under the receiver operating characteristic curve (0.87 vs 0.82, P = .04). The proposed AI system also demonstrated improved tumor localization accuracy (80% vs 57%, P < .001). In holdout test set 2, the proposed AI system exhibited a false-positive rate of 2%. AI assistance improved the radiologists' sensitivity, specificity, and accuracy by 8.9%, 1.2%, and 3.5%, respectively (P < .05 for all). Conclusion The proposed AI tool incorporating FPAR improved humeral tumor detection on CRs and reduced false-positive results in tumor visualization. It may serve as a supportive diagnostic tool to alert radiologists about humeral abnormalities. Keywords: Artificial Intelligence, Conventional Radiography, Humerus, Machine Learning, Shoulder, Tumor Supplemental material is available for this article. © RSNA, 2024.
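The headline comparison above (AUC 0.87 vs 0.82) rests on the area under the receiver operating characteristic curve. As a minimal, library-free sketch (not the authors' code; the scores below are invented), AUC can be computed as the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one:

```python
# Illustrative sketch only: AUC as the probability that a positive case
# receives a higher model score than a negative case (ties count half).
def roc_auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for tumor (positive) and normal (negative) cases
pos = [0.9, 0.8, 0.75, 0.4]
neg = [0.3, 0.35, 0.2, 0.6]
print(round(roc_auc(pos, neg), 3))  # → 0.938
```

This pairwise form is O(n·m); on datasets the size of holdout test set 2, rank-based implementations such as scikit-learn's `roc_auc_score` compute the same quantity far more efficiently.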

Citations: 0
Assistive AI in Lung Cancer Screening: A Retrospective Multinational Study in the United States and Japan.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-05-01 DOI: 10.1148/ryai.230079
Atilla P Kiraly, Corbin A Cunningham, Ryan Najafi, Zaid Nabulsi, Jie Yang, Charles Lau, Joseph R Ledsam, Wenxing Ye, Diego Ardila, Scott M McKinney, Rory Pilgrim, Yun Liu, Hiroaki Saito, Yasuteru Shimamura, Mozziyar Etemadi, David Melnick, Sunny Jansen, Greg S Corrado, Lily Peng, Daniel Tse, Shravya Shetty, Shruthi Prabhakara, David P Naidich, Neeral Beladia, Krish Eswaran

Purpose To evaluate the impact of an artificial intelligence (AI) assistant for lung cancer screening on multinational clinical workflows. Materials and Methods An AI assistant for lung cancer screening was evaluated on two retrospective randomized multireader multicase studies where 627 (141 cancer-positive cases) low-dose chest CT cases were each read twice (with and without AI assistance) by experienced thoracic radiologists (six U.S.-based or six Japan-based radiologists), resulting in a total of 7524 interpretations. Positive cases were defined as those within 2 years before a pathology-confirmed lung cancer diagnosis. Negative cases were defined as those without any subsequent cancer diagnosis for at least 2 years and were enriched for a spectrum of diverse nodules. The studies measured the readers' level of suspicion (on a 0-100 scale), country-specific screening system scoring categories, and management recommendations. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) for level of suspicion and sensitivity and specificity of recall recommendations. Results With AI assistance, the radiologists' AUC increased by 0.023 (0.70 to 0.72; P = .02) for the U.S. study and by 0.023 (0.93 to 0.96; P = .18) for the Japan study. Scoring system specificity for actionable findings increased 5.5% (57% to 63%; P < .001) for the U.S. study and 6.7% (23% to 30%; P < .001) for the Japan study. There was no evidence of a difference in corresponding sensitivity between unassisted and AI-assisted reads for the U.S. (67.3% to 67.5%; P = .88) and Japan (98% to 100%; P > .99) studies. Corresponding stand-alone AI AUC system performance was 0.75 (95% CI: 0.70, 0.81) and 0.88 (95% CI: 0.78, 0.97) for the U.S.- and Japan-based datasets, respectively. 
Conclusion The concurrent AI interface improved lung cancer screening specificity in both U.S.- and Japan-based reader studies, meriting further study in additional international screening environments. Keywords: Assistive Artificial Intelligence, Lung Cancer Screening, CT Supplemental material is available for this article. Published under a CC BY 4.0 license.
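The sensitivity and specificity reported for recall recommendations reduce to simple confusion-matrix ratios over the readers' binary recall decisions. A hedged sketch (the labels and decisions below are invented, not study data):

```python
# Illustrative sketch: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)
# for binary recall decisions against ground-truth cancer labels.
def sens_spec(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cancer labels and reader recall decisions
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
s, sp = sens_spec(y_true, y_pred)
print(round(s, 3), round(sp, 3))  # → 0.667 0.8
```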

Citations: 0
2023 Manuscript Reviewers: A Note of Thanks.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-03-01 DOI: 10.1148/ryai.240138
Curtis P Langlotz, Charles E Kahn
Citations: 0
Curated and Annotated Dataset of Lung US Images in Zambian Children with Clinical Pneumonia.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-03-01 DOI: 10.1148/ryai.230147
Lauren Etter, Margrit Betke, Ingrid Y Camelo, Christopher J Gill, Rachel Pieciak, Russell Thompson, Libertario Demi, Umair Khan, Alyse Wheelock, Janet Katanga, Bindu N Setty, Ilse Castro-Aragon

See also the commentary by Sitek in this issue. Supplemental material is available for this article.

The lung US dataset presented comprises images acquired from 200 Zambian children with pneumonia or severe pneumonia and from 200 age- and sex-matched healthy children; in addition, lung consolidation patterns are annotated for 57 children with pneumonia in the PedLUS dataset. © RSNA, 2024.
Citations: 0
Generative Large Language Models for Detection of Speech Recognition Errors in Radiology Reports.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-03-01 DOI: 10.1148/ryai.230205
Reuben A Schmidt, Jarrel C Y Seah, Ke Cao, Lincoln Lim, Wei Lim, Justin Yeung

This study evaluated the ability of generative large language models (LLMs) to detect speech recognition errors in radiology reports. A dataset of 3233 CT and MRI reports was assessed by radiologists for speech recognition errors. Errors were categorized as clinically significant or not clinically significant. Performances of five generative LLMs-GPT-3.5-turbo, GPT-4, text-davinci-003, Llama-v2-70B-chat, and Bard-were compared in detecting these errors, using manual error detection as the reference standard. Prompt engineering was used to optimize model performance. GPT-4 demonstrated high accuracy in detecting clinically significant errors (precision, 76.9%; recall, 100%; F1 score, 86.9%) and not clinically significant errors (precision, 93.9%; recall, 94.7%; F1 score, 94.3%). Text-davinci-003 achieved F1 scores of 72% and 46.6% for clinically significant and not clinically significant errors, respectively. GPT-3.5-turbo obtained 59.1% and 32.2% F1 scores, while Llama-v2-70B-chat scored 72.8% and 47.7%. Bard showed the lowest accuracy, with F1 scores of 47.5% and 20.9%. GPT-4 effectively identified challenging errors of nonsense phrases and internally inconsistent statements. Longer reports, resident dictation, and overnight shifts were associated with higher error rates. In conclusion, advanced generative LLMs show potential for automatic detection of speech recognition errors in radiology reports. Keywords: CT, Large Language Model, Machine Learning, MRI, Natural Language Processing, Radiology Reports, Speech, Unsupervised Learning Supplemental material is available for this article.
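The per-model precision, recall, and F1 scores above are computed against manual error detection as the reference standard. A minimal sketch of that scoring (the report IDs and flags below are hypothetical, not study data):

```python
# Illustrative sketch: score a model's flagged reports against the
# manually annotated reference set using precision, recall, and F1.
def precision_recall_f1(reference, predicted):
    """reference/predicted: sets of report IDs flagged as containing errors."""
    tp = len(reference & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

manual = {"r1", "r2", "r3", "r4"}   # reports with true errors (hypothetical)
llm = {"r1", "r2", "r3", "r5"}      # reports the model flagged (hypothetical)
p, r, f = precision_recall_f1(manual, llm)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.75 0.75 0.75
```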

Citations: 0
Editor's Recognition Awards.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-03-01 DOI: 10.1148/ryai.240139
Charles E Kahn
Citations: 0
Multicenter Evaluation of a Weakly Supervised Deep Learning Model for Lymph Node Diagnosis in Rectal Cancer at MRI.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-03-01 DOI: 10.1148/ryai.230152
Wei Xia, Dandan Li, Wenguang He, Perry J Pickhardt, Junming Jian, Rui Zhang, Junjie Zhang, Ruirui Song, Tong Tong, Xiaotang Yang, Xin Gao, Yanfen Cui

Purpose To develop a Weakly supervISed model DevelOpment fraMework (WISDOM) model to construct a lymph node (LN) diagnosis model for patients with rectal cancer (RC) that uses preoperative MRI data coupled with postoperative patient-level pathologic information. Materials and Methods In this retrospective study, the WISDOM model was built using MRI (T2-weighted and diffusion-weighted imaging) and patient-level pathologic information (the number of postoperatively confirmed metastatic LNs and resected LNs) based on the data of patients with RC between January 2016 and November 2017. The incremental value of the model in assisting radiologists was investigated. The performances in binary and ternary N staging were evaluated using area under the receiver operating characteristic curve (AUC) and the concordance index (C index), respectively. Results A total of 1014 patients (median age, 62 years; IQR, 54-68 years; 590 male) were analyzed, including the training cohort (n = 589) and internal test cohort (n = 146) from center 1 and two external test cohorts (cohort 1: 117; cohort 2: 162) from centers 2 and 3. The WISDOM model yielded an overall AUC of 0.81 and C index of 0.765, significantly outperforming junior radiologists (AUC = 0.69, P < .001; C index = 0.689, P < .001) and performing comparably with senior radiologists (AUC = 0.79, P = .21; C index = 0.788, P = .22). Moreover, the model significantly improved the performance of junior radiologists (AUC = 0.80, P < .001; C index = 0.798, P < .001) and senior radiologists (AUC = 0.88, P < .001; C index = 0.869, P < .001). Conclusion This study demonstrates the potential of WISDOM as a useful LN diagnosis method using routine rectal MRI data. The improved radiologist performance observed with model assistance highlights the potential clinical utility of WISDOM in practice. Keywords: MR Imaging, Abdomen/GI, Rectum, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. 
Published under a CC BY 4.0 license.
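The C index used above for ternary N staging generalizes AUC to ordered outcomes: it is the fraction of comparable patient pairs (pairs with different true stages) whose predicted ordering agrees with the pathologic ordering, with tied predictions counted half. A small illustrative sketch (invented stages and scores, not the WISDOM model's output):

```python
# Illustrative sketch: concordance index over ordinal stage labels.
def c_index(true_stage, pred_score):
    concordant, comparable = 0.0, 0
    n = len(true_stage)
    for i in range(n):
        for j in range(i + 1, n):
            if true_stage[i] == true_stage[j]:
                continue  # pairs with equal true stage are not comparable
            comparable += 1
            lo, hi = (i, j) if true_stage[i] < true_stage[j] else (j, i)
            if pred_score[lo] < pred_score[hi]:
                concordant += 1.0
            elif pred_score[lo] == pred_score[hi]:
                concordant += 0.5
    return concordant / comparable

stages = [0, 1, 2, 0, 2]            # pathologic N stage per patient
scores = [0.1, 0.5, 0.9, 0.6, 0.7]  # hypothetical model risk scores
print(round(c_index(stages, scores), 3))  # → 0.875
```

For binary staging this reduces to the AUC; libraries such as lifelines provide an equivalent `concordance_index` utility for survival-style data.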

Citations: 0
Identification of Precise 3D CT Radiomics for Habitat Computation by Machine Learning in Cancer.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-03-01 DOI: 10.1148/ryai.230118
Olivia Prior, Carlos Macarro, Víctor Navarro, Camilo Monreal, Marta Ligero, Alonso Garcia-Ruiz, Garazi Serna, Sara Simonetti, Irene Braña, Maria Vieito, Manuel Escobar, Jaume Capdevila, Annette T Byrne, Rodrigo Dienstmann, Rodrigo Toledo, Paolo Nuciforo, Elena Garralda, Francesco Grussu, Kinga Bernatowicz, Raquel Perez-Lopez

Purpose To identify precise three-dimensional radiomics features in CT images that enable computation of stable and biologically meaningful habitats with machine learning for cancer heterogeneity assessment. Materials and Methods This retrospective study included 2436 liver or lung lesions from 605 CT scans (November 2010-December 2021) in 331 patients with cancer (mean age, 64.5 years ± 10.1 [SD]; 185 male patients). Three-dimensional radiomics were computed from original and perturbed (simulated retest) images with different combinations of feature computation kernel radius and bin size. The lower 95% confidence limit (LCL) of the intraclass correlation coefficient (ICC) was used to measure repeatability and reproducibility. Precise features were identified by combining repeatability and reproducibility results (LCL of ICC ≥ 0.50). Habitats were obtained with Gaussian mixture models in original and perturbed data using precise radiomics features and compared with habitats obtained using all features. The Dice similarity coefficient (DSC) was used to assess habitat stability. Biologic correlates of CT habitats were explored in a case study, with a cohort of 13 patients with CT, multiparametric MRI, and tumor biopsies. Results Three-dimensional radiomics showed poor repeatability (LCL of ICC: median [IQR], 0.442 [0.312-0.516]) and poor reproducibility against kernel radius (LCL of ICC: median [IQR], 0.440 [0.33-0.526]) but excellent reproducibility against bin size (LCL of ICC: median [IQR], 0.929 [0.853-0.988]). Twenty-six radiomics features were precise, differing in lung and liver lesions. Habitats obtained with precise features (DSC: median [IQR], 0.601 [0.494-0.712] and 0.651 [0.52-0.784] for lung and liver lesions, respectively) were more stable than those obtained with all features (DSC: median [IQR], 0.532 [0.424-0.637] and 0.587 [0.465-0.703] for lung and liver lesions, respectively; P < .001). 
In the case study, CT habitats correlated quantitatively and qualitatively with heterogeneity observed in multiparametric MRI habitats and histology. Conclusion Precise three-dimensional radiomics features were identified on CT images that enabled tumor heterogeneity assessment through stable tumor habitat computation. Keywords: CT, Diffusion-weighted Imaging, Dynamic Contrast-enhanced MRI, MRI, Radiomics, Unsupervised Learning, Oncology, Liver, Lung Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Sagreiya in this issue.
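Habitat stability above is quantified with the Dice similarity coefficient between a habitat map computed on the original image and the same habitat recomputed on the perturbed (simulated retest) image. A minimal sketch on toy binary masks (the voxel assignments below are invented):

```python
# Illustrative sketch: DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks
# marking which voxels belong to one habitat before and after perturbation.
def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

# Voxels assigned to one habitat: original vs perturbed segmentation
original = [1, 1, 1, 0, 0, 1, 0, 1]
perturbed = [1, 1, 0, 0, 1, 1, 0, 1]
print(round(dice(original, perturbed), 2))  # → 0.8
```

DSC is 1.0 for identical maps and 0.0 for disjoint ones, so the median values reported above (≈0.6 with precise features) indicate moderate but improved habitat stability.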

"Just Accepted" papers have been peer-reviewed and accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, typesetting, and proof review before it is published in its final version. Please note that errors affecting content may be identified during production of the final proofs.

Identification of Precise 3D CT Radiomics for Habitat Computation by Machine Learning in Cancer.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230118
Olivia Prior, Carlos Macarro, Víctor Navarro, Camilo Monreal, Marta Ligero, Alonso Garcia-Ruiz, Garazi Serna, Sara Simonetti, Irene Braña, Maria Vieito, Manuel Escobar, Jaume Capdevila, Annette T Byrne, Rodrigo Dienstmann, Rodrigo Toledo, Paolo Nuciforo, Elena Garralda, Francesco Grussu, Kinga Bernatowicz, Raquel Perez-Lopez
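The precision criterion above (lower 95% confidence limit of the ICC ≥ 0.50) can be sketched with a one-way random-effects ICC(1,1) and its F-based lower confidence limit. This is a minimal sketch assuming the ICC(1,1) form and synthetic test-retest feature values; the paper does not specify which ICC variant it used, so treat the variant choice as an assumption.

```python
import numpy as np
from scipy.stats import f as f_dist

def icc1_with_lcl(scores: np.ndarray, alpha: float = 0.05):
    """One-way random-effects ICC(1,1) and its lower 95% confidence limit.

    scores: (n_subjects, k_measurements) array, e.g. one radiomics feature
    measured on original and perturbed (simulated retest) images.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    icc = (msb - msw) / (msb + (k - 1) * msw)
    # F-based lower confidence bound (Shrout-Fleiss form for ICC(1,1)).
    F = msb / msw
    fl = F / f_dist.ppf(1 - alpha / 2, n - 1, n * (k - 1))
    lcl = (fl - 1) / (fl + k - 1)
    return icc, lcl

rng = np.random.default_rng(1)
true_value = rng.normal(0, 1, size=200)               # per-lesion feature value
test = true_value + rng.normal(0, 0.2, size=200)       # original image
retest = true_value + rng.normal(0, 0.2, size=200)     # perturbed image
icc, lcl = icc1_with_lcl(np.column_stack([test, retest]))
print(f"ICC = {icc:.3f}, 95% LCL = {lcl:.3f}")
```

Under the paper's criterion, a feature would be kept as "precise" when the LCL is at least 0.50; using the lower confidence limit rather than the ICC point estimate makes the selection conservative against sampling noise.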
Citations: 0
Artificial Intelligence in Radiology: Bridging Global Health Care Gaps through Innovation and Inclusion.
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.240093
Arkadiusz Sitek
Radiology-Artificial Intelligence, 6(2): e240093.
Citations: 0
Can AI Predict the Need for Surgery in Traumatic Brain Injury?
IF 9.8 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-03-01 DOI: 10.1148/ryai.230587
Sven Haller
Radiology-Artificial Intelligence, 6(2): e230587.
Citations: 0