
Latest Publications from Radiology-Artificial Intelligence

AI-integrated Screening to Replace Double Reading of Mammograms: A Population-wide Accuracy and Feasibility Study.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1148/ryai.230529
Mohammad T Elhakim, Sarah W Stougaard, Ole Graumann, Mads Nielsen, Oke Gerke, Lisbet B Larsen, Benjamin S B Rasmussen

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249,402 mammograms from a representative screening population. A commercial AI system replaced the first reader (Scenario 1: Integrated AIfirst), the second reader (Scenario 2: Integrated AIsecond), or both readers for triaging of low- and high-risk cases (Integrated AItriage). AI threshold values were partly chosen based on previous validation and fixing screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, Integrated AIfirst showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%; P < .001). Integrated AIsecond had lower sensitivity (-1.58%; P < 0.001), negative predictive value (NPV) (- 0.01%; P < .001) and recall rate (< 0.06%; P = 0.04), but a higher positive predictive value (PPV) (+0.03%; P < .001) and arbitration rate (+1.22%; P < .001). Integrated AItriage achieved higher sensitivity (+1.33%; P < .001), PPV (+0.36%; P = .03), and NPV (+0.01%; P < .001) but lower arbitration rate (-0.88%; P < .001). Replacing one or both readers with AI seems feasible, however, the site of application in the workflow can have clinically relevant effects on accuracy and workload. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。基于深度学习的人工智能(AI)解决方案支持的乳腺放射摄影筛查有可能在不影响乳腺癌检测准确性的情况下减少工作量,但工作流程中的部署地点可能至关重要。这项回顾性研究比较了三种模拟的人工智能集成筛查场景和标准双读与仲裁,样本来自具有代表性的筛查人群的 249,402 张乳房 X 光照片。商业人工智能系统取代了第一位读片员(情景 1:集成人工智能第一读片员)、第二位读片员(情景 2:集成人工智能第二读片员)或两位读片员,对低风险和高风险病例进行分流(集成人工智能分流)。人工智能阈值的部分选择是基于先前的验证,并将各种情况下的读屏量固定在大约 50%。计算了检测准确率。与标准双读相比,除了仲裁率较高(+0.99%;P < .001)外,综合人工智能第一在准确性指标上没有显示出差异。综合 AIsecond 的灵敏度 (-1.58%; P < 0.001)、阴性预测值 (NPV) (- 0.01%; P < 0.001) 和召回率 (< 0.06%; P = 0.04) 较低,但阳性预测值 (PPV) (+0.03%; P < 0.001) 和仲裁率 (+1.22%; P < 0.001) 较高。综合 AItriage 实现了更高的灵敏度(+1.33%;P < .001)、PPV(+0.36%;P = .03)和 NPV(+0.01%;P < .001),但仲裁率较低(-0.88%;P < .001)。用人工智能取代一台或两台读码器似乎是可行的,但工作流程中的应用位置会对准确性和工作量产生临床相关影响。©RSNA,2024。
{"title":"AI-integrated Screening to Replace Double Reading of Mammograms: A Population-wide Accuracy and Feasibility Study.","authors":"Mohammad T Elhakim, Sarah W Stougaard, Ole Graumann, Mads Nielsen, Oke Gerke, Lisbet B Larsen, Benjamin S B Rasmussen","doi":"10.1148/ryai.230529","DOIUrl":"https://doi.org/10.1148/ryai.230529","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249,402 mammograms from a representative screening population. A commercial AI system replaced the first reader (Scenario 1: Integrated AI<sub>first</sub>), the second reader (Scenario 2: Integrated AI<sub>second</sub>), or both readers for triaging of low- and high-risk cases (Integrated AI<sub>triage</sub>). AI threshold values were partly chosen based on previous validation and fixing screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, Integrated AI<sub>first</sub> showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%; <i>P</i> < .001). Integrated AI<sub>second</sub> had lower sensitivity (-1.58%; <i>P</i> < 0.001), negative predictive value (NPV) (- 0.01%; <i>P</i> < .001) and recall rate (< 0.06%; <i>P</i> = 0.04), but a higher positive predictive value (PPV) (+0.03%; <i>P</i> < .001) and arbitration rate (+1.22%; <i>P</i> < .001). Integrated AI<sub>triage</sub> achieved higher sensitivity (+1.33%; <i>P</i> < .001), PPV (+0.36%; <i>P</i> = .03), and NPV (+0.01%; <i>P</i> < .001) but lower arbitration rate (-0.88%; <i>P</i> < .001). Replacing one or both readers with AI seems feasible, however, the site of application in the workflow can have clinically relevant effects on accuracy and workload. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142126863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-28 DOI: 10.1148/ryai.230296
Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop a highly generalizable weakly supervised model to automatically detect and localize image- level intracranial hemorrhage (ICH) using study-level labels. Materials and Methods In this retrospective study, the proposed model was pretrained on the image-level RSNA dataset and fine-tuned on a local dataset using attention-based bidirectional long-short-term memory networks. This local training dataset included 10,699 noncontrast head CT scans from 7469 patients with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and its generalizability was evaluated on an external independent dataset. Results The model achieved a positive predictive value (PPV) of 85.7% (95% CI: [84.0%, 87.4%]) and an AUC of 0.96 (95% CI: [0.96, 0.97]) on the held-out local test set (n = 7243, 3721 female) and 89.3% (95% CI: [87.8%, 90.7%]) and 0.96 (95% CI: [0.96, 0.97]), respectively, on the external test set (n = 491, 178 female). For 100 randomly selected samples, the model achieved performance on par with two neuroradiologists, but with a significantly faster (P < .05) diagnostic time of 5.04 seconds per scan (versus 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with neuroradiologists' interpretations. Conclusion The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响文章内容的错误。目的 建立一个高度通用的弱监督模型,利用研究级标签自动检测和定位图像级颅内出血(ICH)。材料与方法 在这项回顾性研究中,利用基于注意力的双向长短期记忆网络,在图像级 RSNA 数据集上对所提出的模型进行了预训练,并在本地数据集上对其进行了微调。该本地训练数据集包括来自 7469 名患者的 10,699 张非对比头部 CT 扫描图像,这些图像带有从放射学报告中提取的 ICH 研究级标签。使用 McNemar 检验将模型的性能与两位资深神经放射学专家在 100 个随机测试扫描中的性能进行了比较,并在外部独立数据集上评估了模型的普适性。结果 在本地测试集(n = 7243,3721 名女性)上,该模型的阳性预测值(PPV)为 85.7%(95% CI:[84.0%, 87.4%]),AUC 为 0.96(95% CI:[0.96, 0.97]);在外部测试集(n = 491,178 名女性)上,该模型的阳性预测值(PPV)为 89.3%(95% CI:[87.8%, 90.7%]),AUC 为 0.96(95% CI:[0.96, 0.97])。在随机抽取的 100 个样本中,该模型的表现与两名神经放射科医生相当,但诊断时间明显更快(P < .05),每次扫描仅需 5.04 秒(而两名神经放射科医生的诊断时间分别为 86 秒和 22.2 秒)。该模型的注意力权重和热图与神经放射科医生的解释一致。结论 所提出的模型具有很高的普适性和 PPV 值,为加快 ICH 检测和优先排序提供了有价值的工具,同时减少了放射医师工作流程中假阳性的中断。©RSNA,2024。
{"title":"Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.","authors":"Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill","doi":"10.1148/ryai.230296","DOIUrl":"https://doi.org/10.1148/ryai.230296","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a highly generalizable weakly supervised model to automatically detect and localize image- level intracranial hemorrhage (ICH) using study-level labels. Materials and Methods In this retrospective study, the proposed model was pretrained on the image-level RSNA dataset and fine-tuned on a local dataset using attention-based bidirectional long-short-term memory networks. This local training dataset included 10,699 noncontrast head CT scans from 7469 patients with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and its generalizability was evaluated on an external independent dataset. Results The model achieved a positive predictive value (PPV) of 85.7% (95% CI: [84.0%, 87.4%]) and an AUC of 0.96 (95% CI: [0.96, 0.97]) on the held-out local test set (<i>n</i> = 7243, 3721 female) and 89.3% (95% CI: [87.8%, 90.7%]) and 0.96 (95% CI: [0.96, 0.97]), respectively, on the external test set (<i>n</i> = 491, 178 female). For 100 randomly selected samples, the model achieved performance on par with two neuroradiologists, but with a significantly faster (<i>P</i> < .05) diagnostic time of 5.04 seconds per scan (versus 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with neuroradiologists' interpretations. Conclusion The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142081915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
nnU-Net-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-21 DOI: 10.1148/ryai.230115
Rohan Bareja, Marwa Ismail, Douglas Martin, Ameya Nayate, Ipsa Yadav, Murad Labbad, Prateek Dullur, Sanya Garg, Benita Tamrazi, Ralph Salloum, Ashley Margol, Alexander Judkins, Sukanya Raj Iyer, Peter de Blank, Pallavi Tiwari

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate nn-Unet-based segmentation models for automated delineation of medulloblastoma (MB) tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female), with ages ranging from 2-18 years, with MB tumors from three different sites (28 from Hospital A, 18 from Hospital B, 32 from Hospital C), who had data from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, FLAIR) available. The scans were retrospectively collected from the year 2000 until May 2019. Reference standard annotations of the tumor habitat, including enhancing tumor, edema, and cystic core + nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. The two models were trained as follows: (1) transfer learning nn-Unet model was pretrained on an adult glioma cohort (n = 484) and fine-tuned on MB studies using Models Genesis, and (2) direct deep learning nn-Unet model was trained directly on the MB datasets, across five-fold cross-validation. Model robustness was evaluated on the three datasets when using different combinations of training and test sets, with data from 2 sites at a time used for training and data from the third site used for testing. Results Analysis on the 3 test sites yielded Dice scores of 0.81, 0.86, 0.86 and 0.80, 0.86, 0.85 for tumor habitat; 0.68, 0.84, 0.77 and 0.67, 0.83, 0.76 for enhancing tumor; 0.56, 0.71, 0.69 and 0.56, 0.71, 0.70 for edema; and 0.32, 0.48, 0.43 and 0.29, 0.44, 0.41 for cystic core + nonenhancing tumor for the transfer learning-and direct-nn-Unet models, respectively. The models were largely robust to site-specific variations. Conclusion nn-Unet segmentation models hold promise for accurate, robust automated delineation of MB tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric MB. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 评估基于 nn-Unet 的分割模型在多机构 MRI 扫描中自动划分髓母细胞瘤(MB)肿瘤的情况。材料与方法 这项回顾性研究纳入了 78 名儿科患者(52 名男性,26 名女性),他们的年龄在 2-18 岁之间,患有来自三个不同部位的 MB 肿瘤(28 名来自 A 医院,18 名来自 B 医院,32 名来自 C 医院),他们拥有三种临床 MRI 方案(钆增强 T1 加权、T2 加权、FLAIR)的数据。这些扫描数据是回顾性收集的,时间从 2000 年至 2019 年 5 月。肿瘤生境的参考标准注释,包括增强肿瘤、水肿、囊性核心+非增强肿瘤亚分区,由两位经验丰富的神经放射科医生完成。预处理包括与年龄相适应的图谱配准、头骨剥离、偏差校正和强度匹配。两个模型的训练方法如下:(1) 转移学习 nn-Unet 模型在成人胶质瘤队列(n = 484)上进行预训练,并使用 Models Genesis 在 MB 研究上进行微调;(2) 直接深度学习 nn-Unet 模型直接在 MB 数据集上进行训练,并进行五倍交叉验证。使用不同的训练集和测试集组合在三个数据集上评估了模型的鲁棒性,每次使用两个站点的数据进行训练,使用第三个站点的数据进行测试。结果 对 3 个测试点进行分析后发现,肿瘤生境的 Dice 分数分别为 0.81、0.86、0.86 和 0.80、0.86、0.85;肿瘤增强的 Dice 分数分别为 0.68、0.84、0.77 和 0.67、0.83、0.76;肿瘤生长的 Dice 分数分别为 0.56、0.对于转移学习模型和直接 nn-Unet 模型,水肿分别为 0.56、0.71、0.69 和 0.56、0.71、0.70;囊核 + 非增强肿瘤分别为 0.32、0.48、0.43 和 0.29、0.44、0.41。这些模型对特定部位的变化基本没有影响。结论 nn-Unet 分割模型有望准确、稳健地自动划分 MB 肿瘤亚分区,从而更有效地制定小儿 MB 放疗计划。©RSNA,2024。
{"title":"Nn-Unet-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study.","authors":"Rohan Bareja, Marwa Ismail, Douglas Martin, Ameya Nayate, Ipsa Yadav, Murad Labbad, Prateek Dullur, Sanya Garg, Benita Tamrazi, Ralph Salloum, Ashley Margol, Alexander Judkins, Sukanya Raj Iyer, Peter de Blank, Pallavi Tiwari","doi":"10.1148/ryai.230115","DOIUrl":"https://doi.org/10.1148/ryai.230115","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To evaluate nn-Unet-based segmentation models for automated delineation of medulloblastoma (MB) tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female), with ages ranging from 2-18 years, with MB tumors from three different sites (28 from Hospital A, 18 from Hospital B, 32 from Hospital C), who had data from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, FLAIR) available. The scans were retrospectively collected from the year 2000 until May 2019. Reference standard annotations of the tumor habitat, including enhancing tumor, edema, and cystic core + nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. The two models were trained as follows: (1) transfer learning nn-Unet model was pretrained on an adult glioma cohort (<i>n</i> = 484) and fine-tuned on MB studies using Models Genesis, and (2) direct deep learning nn-Unet model was trained directly on the MB datasets, across five-fold cross-validation. Model robustness was evaluated on the three datasets when using different combinations of training and test sets, with data from 2 sites at a time used for training and data from the third site used for testing. Results Analysis on the 3 test sites yielded Dice scores of 0.81, 0.86, 0.86 and 0.80, 0.86, 0.85 for tumor habitat; 0.68, 0.84, 0.77 and 0.67, 0.83, 0.76 for enhancing tumor; 0.56, 0.71, 0.69 and 0.56, 0.71, 0.70 for edema; and 0.32, 0.48, 0.43 and 0.29, 0.44, 0.41 for cystic core + nonenhancing tumor for the transfer learning-and direct-nn-Unet models, respectively. The models were largely robust to site-specific variations. Conclusion nn-Unet segmentation models hold promise for accurate, robust automated delineation of MB tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric MB. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-21 DOI: 10.1148/ryai.230521
Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite bp-MRI datasets. Materials and Methods This retrospective study included data from 5,150 patients (14,191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bp-MRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual DW images acquired using various b-values, to align with the style of images acquired using b-values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1,692 test cases (2,393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PI-RADS ≥ 3, and 0.77 and 0.80 (P < .001) for PI-RADS ≥ 4 PCa lesions. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for PI-RADS ≥ 3, and 0.50 and 0.77 (P < .001) for PI-RADS ≥ 4 PCa lesions. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS recommended DWI protocol (eg, with an extremely high b-value). ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响文章内容的错误。目的 确定使用生成图像的无监督领域适应(UDA)方法是否能提高使用多部位 bp-MRI 数据集进行前列腺癌(PCa)检测的监督学习(SL)模型的性能。材料与方法 这项回顾性研究包括九个不同成像中心收集的 5,150 名患者(14,191 个样本)的数据。研究人员使用统一生成模型开发了一种新型 UDA 方法,用于使用多部位 bp-MRI 数据集检测 PCa。该方法将扩散加权成像(DWI)采集数据(包括表观扩散系数(ADC)和使用不同 b 值采集的单个 DW 图像)转换为前列腺成像报告和数据系统(PI-RADS)指南推荐的 b 值采集图像样式。生成的 ADC 和 DW 图像取代了用于 PCa 检测的原始图像。评估使用了一组独立的 1,692 个测试案例(2,393 个样本)。接收者操作特征曲线下面积(AUC)被用作主要指标,统计分析通过引导法进行。结果 在所有测试病例中,对于 PI-RADS ≥ 3 的 PCa 病变,基线 SL 和 UDA 方法的 AUC 值分别为 0.73 和 0.79(P < .001);对于 PI-RADS ≥ 4 的 PCa 病变,基线 SL 和 UDA 方法的 AUC 值分别为 0.77 和 0.80(P < .001)。在最不利的图像采集设置下的 361 个测试病例中,基线 SL 和 UDA 的 AUC 值分别为:PI-RADS ≥ 3 为 0.49 和 0.76(P < .001),PI-RADS ≥ 4 PCa 病变为 0.50 和 0.77(P < .001)。结论 使用生成图像的 UDA 提高了 SL 方法在不同 b 值的多部位数据集上检测 PCa 病灶的性能,尤其是在采集的图像明显偏离 PI-RADS 推荐的 DWI 方案(如具有极高 b 值)时。©RSNA,2024。
{"title":"Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets.","authors":"Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou","doi":"10.1148/ryai.230521","DOIUrl":"https://doi.org/10.1148/ryai.230521","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite bp-MRI datasets. Materials and Methods This retrospective study included data from 5,150 patients (14,191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bp-MRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual DW images acquired using various b-values, to align with the style of images acquired using b-values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1,692 test cases (2,393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (<i>P</i> < .001), respectively, for PI-RADS ≥ 3, and 0.77 and 0.80 (<i>P</i> < .001) for PI-RADS ≥ 4 PCa lesions. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (<i>P</i> < .001) for PI-RADS ≥ 3, and 0.50 and 0.77 (<i>P</i> < .001) for PI-RADS ≥ 4 PCa lesions. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS recommended DWI protocol (eg, with an extremely high b-value). ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Fairness of Automated Chest Radiograph Diagnosis by Contrastive Learning.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-21 DOI: 10.1148/ryai.230342
Mingquan Lin, Tianhao Li, Zhaoyi Sun, Gregory Holste, Ying Ding, Fei Wang, George Shih, Yifan Peng

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop an artificial intelligence model that utilizes supervised contrastive learning to minimize bias in chest radiograph (CXR) diagnosis. Materials and Methods In this retrospective study, the proposed method was evaluated on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77,887 CXRs from 27,796 patients collected as of April 20, 2023 for COVID-19 diagnosis, and the NIH Chest x-ray 14 (NIH-CXR) dataset with 112,120 CXRs from 30,805 patients collected between 1992 and 2015. In the NIH-CXR dataset, thoracic abnormalities included atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, or hernia. The proposed method utilized supervised contrastive learning with carefully selected positive and negative samples to generate fair image embeddings, which were fine-tuned for subsequent tasks to reduce bias in CXR diagnosis. The method was evaluated using the marginal area under the receiver operating characteristic curve (AUC) difference (ΔmAUC). Results The proposed model showed a significant decrease in bias across all subgroups compared with the baseline models, as evidenced by a paired T-test (P < .001). The ΔmAUCs obtained by the proposed method were 0.01 (95% CI, 0.01-0.01), 0.21 (95% CI, 0.21-0.21), and 0.10 (95% CI, 0.10-0.10) for sex, race, and age subgroups, respectively, on MIDRC, and 0.01 (95% CI, 0.01-0.01) and 0.05 (95% CI, 0.05-0.05) for sex and age subgroups, respectively, on NIH-CXR. Conclusion Employing supervised contrastive learning can mitigate bias in CXR diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 开发一种人工智能模型,利用有监督的对比学习最大程度地减少胸片(CXR)诊断中的偏差。材料与方法 在这项回顾性研究中,我们在两个数据集上对所提出的方法进行了评估:医学影像和数据资源中心(MIDRC)数据集,其中包含截至 2023 年 4 月 20 日为 COVID-19 诊断收集的 27,796 名患者的 77,887 张 CXR;以及美国国立卫生研究院胸部 X 光 14(NIH-CXR)数据集,其中包含 1992 年至 2015 年收集的 30,805 名患者的 112,120 张 CXR。在 NIH-CXR 数据集中,胸部异常包括肺不张、心脏肿大、渗出、浸润、肿块、结节、肺炎、气胸、合并症、水肿、肺气肿、纤维化、胸膜增厚或疝气。所提出的方法利用监督对比学习和精心挑选的正负样本生成公平的图像嵌入,并在后续任务中对其进行微调,以减少 CXR 诊断中的偏差。使用接收者工作特征曲线下的边际面积(AUC)差值(ΔmAUC)对该方法进行了评估。结果 经配对 T 检验(P < .001)显示,与基线模型相比,所提出的模型在所有亚组中的偏倚率均显著降低。在 MIDRC 上,所提方法获得的性别、种族和年龄分组的 ΔmAUCs 分别为 0.01(95% CI,0.01-0.01)、0.21(95% CI,0.21-0.21)和 0.10(95% CI,0.10-0.10);在 NIH-CXR 上,性别和年龄分组的 ΔmAUCs 分别为 0.01(95% CI,0.01-0.01)和 0.05(95% CI,0.05-0.05)。结论 采用有监督的对比学习可以减轻 CXR 诊断中的偏差,解决基于深度学习的诊断方法的公平性和可靠性问题。©RSNA,2024。
{"title":"Improving Fairness of Automated Chest Radiograph Diagnosis by Contrastive Learning.","authors":"Mingquan Lin, Tianhao Li, Zhaoyi Sun, Gregory Holste, Ying Ding, Fei Wang, George Shih, Yifan Peng","doi":"10.1148/ryai.230342","DOIUrl":"https://doi.org/10.1148/ryai.230342","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop an artificial intelligence model that utilizes supervised contrastive learning to minimize bias in chest radiograph (CXR) diagnosis. Materials and Methods In this retrospective study, the proposed method was evaluated on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77,887 CXRs from 27,796 patients collected as of April 20, 2023 for COVID-19 diagnosis, and the NIH Chest x-ray 14 (NIH-CXR) dataset with 112,120 CXRs from 30,805 patients collected between 1992 and 2015. In the NIH-CXR dataset, thoracic abnormalities included atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, or hernia. The proposed method utilized supervised contrastive learning with carefully selected positive and negative samples to generate fair image embeddings, which were fine-tuned for subsequent tasks to reduce bias in CXR diagnosis. The method was evaluated using the marginal area under the receiver operating characteristic curve (AUC) difference (ΔmAUC). Results The proposed model showed a significant decrease in bias across all subgroups compared with the baseline models, as evidenced by a paired T-test (<i>P</i> < .001). The ΔmAUCs obtained by the proposed method were 0.01 (95% CI, 0.01-0.01), 0.21 (95% CI, 0.21-0.21), and 0.10 (95% CI, 0.10-0.10) for sex, race, and age subgroups, respectively, on MIDRC, and 0.01 (95% CI, 0.01-0.01) and 0.05 (95% CI, 0.05-0.05) for sex and age subgroups, respectively, on NIH-CXR. Conclusion Employing supervised contrastive learning can mitigate bias in CXR diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor on Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-21 DOI: 10.1148/ryai.230489
Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and validate a deep learning (DL) method to detect and segment enhancing and nonenhancing cellular tumor on pre- and posttreatment MRI scans of patients with glioblastoma and to predict overall survival (OS) and progression-free survival (PFS). Materials and Methods This retrospective study included 1397 MRIs in 1297 patients with glioblastoma, including an internal cohort of 243 MRIs (January 2010-June 2022) for model training and cross-validation and four external test cohorts. Cellular tumor maps were segmented by two radiologists based on imaging, clinical history, and pathology. Multimodal MRI with perfusion and multishell diffusion imaging were inputted into a nnU-Net DL model to segment cellular tumor. Segmentation performance (Dice score) and performance in detecting recurrent tumor from posttreatment changes (area under the receiver operating characteristic curve [AUC]) were quantified. Model performance in predicting OS and PFS was assessed using Cox multivariable analysis. Results A cohort of 178 patients (mean age, 56 years ± [SD]13; 121 male, 57 female) with 243 MRI timepoints, as well as four external datasets with 55, 70, 610 and 419 MRI timepoints, respectively, were evaluated. The median Dice score was 0.79 (IQR:0.53-0.89) and the AUC for detecting residual/recurrent tumor was 0.84 (95% CI:0.79- 0.89). In the internal test set, estimated cellular tumor volume was significantly associated with OS (hazard ratio [HR] = 1.04/mL, P < .001) and PFS (HR = 1.04/mL, P < .001) when adjusting for age, sex and gross total resection status. In the external test sets, estimated cellular tumor volume was significantly associated with OS (HR = 1.01/mL, P < .001) when adjusting for age, sex and gross total resection status. Conclusion A DL model incorporating advanced imaging could accurately segment enhancing and nonenhancing cellular tumor, classify recurrent/residual tumor from posttreatment changes, and predict OS and PFS in patients with glioblastoma. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。目的 开发并验证一种深度学习(DL)方法,用于检测和分割胶质母细胞瘤患者治疗前和治疗后 MRI 扫描中的增强和非增强细胞肿瘤,并预测总生存期(OS)和无进展生存期(PFS)。材料与方法 这项回顾性研究包括 1297 名胶质母细胞瘤患者的 1397 次核磁共振成像,其中包括用于模型训练和交叉验证的 243 次核磁共振成像内部队列(2010 年 1 月至 2022 年 6 月)和四个外部测试队列。细胞肿瘤图由两名放射科医生根据成像、临床病史和病理学进行分割。多模态 MRI 灌注和多壳体扩散成像被输入 nnU-Net DL 模型,以分割细胞肿瘤。对分割性能(Dice评分)和从治疗后变化中检测复发肿瘤的性能(接收器操作特征曲线下面积[AUC])进行了量化。使用 Cox 多变量分析评估了模型预测 OS 和 PFS 的性能。结果 评估了一组 178 例患者(平均年龄 56 岁 ± [SD]13;男性 121 例,女性 57 例),共 243 个 MRI 时间点,以及四个外部数据集,分别有 55、70、610 和 419 个 MRI 时间点。Dice 评分的中位数为 0.79(IQR:0.53-0.89),检测残留/复发肿瘤的 AUC 为 0.84(95% CI:0.79-0.89)。在内部测试组中,当调整年龄、性别和总切除状态时,估计的细胞肿瘤体积与OS(危险比[HR] = 1.04/mL,P < .001)和PFS(HR = 1.04/mL,P < .001)显著相关。在外部测试集中,当调整年龄、性别和大体全切除状态时,估计的细胞肿瘤体积与 OS 显著相关(HR = 1.01/mL,P < .001)。结论 结合先进成像技术的 DL 模型可准确分割增强和非增强细胞肿瘤,根据治疗后的变化对复发/残留肿瘤进行分类,并预测胶质母细胞瘤患者的 OS 和 PFS。©RSNA,2024。
{"title":"Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor on Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.","authors":"Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie","doi":"10.1148/ryai.230489","DOIUrl":"10.1148/ryai.230489","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To develop and validate a deep learning (DL) method to detect and segment enhancing and nonenhancing cellular tumor on pre- and posttreatment MRI scans of patients with glioblastoma and to predict overall survival (OS) and progression-free survival (PFS). Materials and Methods This retrospective study included 1397 MRIs in 1297 patients with glioblastoma, including an internal cohort of 243 MRIs (January 2010-June 2022) for model training and cross-validation and four external test cohorts. Cellular tumor maps were segmented by two radiologists based on imaging, clinical history, and pathology. Multimodal MRI with perfusion and multishell diffusion imaging were inputted into a nnU-Net DL model to segment cellular tumor. Segmentation performance (Dice score) and performance in detecting recurrent tumor from posttreatment changes (area under the receiver operating characteristic curve [AUC]) were quantified. Model performance in predicting OS and PFS was assessed using Cox multivariable analysis. Results A cohort of 178 patients (mean age, 56 years ± [SD]13; 121 male, 57 female) with 243 MRI timepoints, as well as four external datasets with 55, 70, 610 and 419 MRI timepoints, respectively, were evaluated. The median Dice score was 0.79 (IQR:0.53-0.89) and the AUC for detecting residual/recurrent tumor was 0.84 (95% CI:0.79- 0.89). In the internal test set, estimated cellular tumor volume was significantly associated with OS (hazard ratio [HR] = 1.04/mL, <i>P</i> < .001) and PFS (HR = 1.04/mL, <i>P</i> < .001) when adjusting for age, sex and gross total resection status. In the external test sets, estimated cellular tumor volume was significantly associated with OS (HR = 1.01/mL, <i>P</i> < .001) when adjusting for age, sex and gross total resection status. Conclusion A DL model incorporating advanced imaging could accurately segment enhancing and nonenhancing cellular tumor, classify recurrent/residual tumor from posttreatment changes, and predict OS and PFS in patients with glioblastoma. 
©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Computer-aided Detection for Digital Breast Tomosynthesis by Incorporating Temporal Change.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-14 DOI: 10.1148/ryai.230391
Yinhao Ren, Zisheng Liang, Jun Ge, Xiaoming Xu, Jonathan Go, Derek L Nguyen, Joseph Y Lo, Lars J Grimm

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop a deep learning algorithm that uses temporal information to improve the performance of a previously published framework of cancer lesion detection for digital breast tomosynthesis (DBT). Materials and Methods This retrospective study analyzed the current and the 1-year prior Hologic DBT screening examinations from 8 different institutions between 2016 to 2020. The dataset contained 973 cancer and 7123 noncancer cases. The front-end of this algorithm was an existing deep learning framework that performed singleview lesion detection followed by ipsilateral view matching. For this study, PriorNet was implemented as a cascaded deep learning module that used the additional growth information to refine the final probability of malignancy. Data from seven of the eight sites were used for training and validation, while the eighth site was reserved for external testing. Model performance was evaluated using localization receiver operating characteristic (ROC) curves. Results On the validation set, PriorNet showed an area under the ROC curve (AUC) of 0.931 (95% CI 0.930- 0.931), which outperformed both baseline models using single-view detection (AUC, 0.892 (95% CI 0.891-0.892), P < .001) and ipsilateral matching (AUC, 0.915 (95% CI 0.914-0.915), P < .001). On the external test set, PriorNet achieved an AUC of 0.896 (95% CI 0.885-0.896), outperforming both baselines (AUCs, 0.846 (95% CI 0.846-0.847, P < .001) and 0.865 (95% CI 0.865-0.866) P < .001, respectively). In the high sensitivity range of 0.9 to 1.0, the partial AUC of PriorNet was significantly higher (P < .001) relative to both baselines. Conclusion PriorNet using temporal information further improved the breast cancer detection performance of an existing DBT cancer detection framework. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。目的 开发一种利用时间信息的深度学习算法,以提高以前发表的数字乳腺断层合成(DBT)癌症病灶检测框架的性能。材料与方法 这项回顾性研究分析了 8 家不同机构在 2016 年至 2020 年期间进行的当前和之前 1 年的 Hologic DBT 筛查检查。数据集包含 973 例癌症病例和 7123 例非癌症病例。该算法的前端是一个现有的深度学习框架,可进行单视图病变检测,然后进行同侧视图匹配。在本研究中,PriorNet 是作为级联深度学习模块实施的,它使用额外的生长信息来完善恶性肿瘤的最终概率。八个部位中七个部位的数据用于训练和验证,而第八个部位则用于外部测试。使用定位接收器操作特征曲线(ROC)对模型性能进行评估。结果 在验证集上,PriorNet 的 ROC 曲线下面积(AUC)为 0.931(95% CI 0.930-0.931),优于使用单视角检测(AUC,0.892(95% CI 0.891-0.892),P < .001)和同侧匹配(AUC,0.915(95% CI 0.914-0.915),P < .001)的两个基线模型。在外部测试集上,PriorNet 的 AUC 为 0.896(95% CI 0.885-0.896),优于两个基线(AUC 分别为 0.846(95% CI 0.846-0.847,P < .001)和 0.865(95% CI 0.865-0.866),P < .001)。在 0.9 至 1.0 的高灵敏度范围内,PriorNet 的部分 AUC 明显高于两种基线(P < .001)。结论 使用时间信息的 PriorNet 进一步提高了现有 DBT 癌症检测框架的乳腺癌检测性能。©RSNA, 2024.
{"title":"Improving Computer-aided Detection for Digital Breast Tomosynthesis by Incorporating Temporal Change.","authors":"Yinhao Ren, Zisheng Liang, Jun Ge, Xiaoming Xu, Jonathan Go, Derek L Nguyen, Joseph Y Lo, Lars J Grimm","doi":"10.1148/ryai.230391","DOIUrl":"https://doi.org/10.1148/ryai.230391","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning algorithm that uses temporal information to improve the performance of a previously published framework of cancer lesion detection for digital breast tomosynthesis (DBT). Materials and Methods This retrospective study analyzed the current and the 1-year prior Hologic DBT screening examinations from 8 different institutions between 2016 to 2020. The dataset contained 973 cancer and 7123 noncancer cases. The front-end of this algorithm was an existing deep learning framework that performed singleview lesion detection followed by ipsilateral view matching. For this study, PriorNet was implemented as a cascaded deep learning module that used the additional growth information to refine the final probability of malignancy. Data from seven of the eight sites were used for training and validation, while the eighth site was reserved for external testing. Model performance was evaluated using localization receiver operating characteristic (ROC) curves. Results On the validation set, PriorNet showed an area under the ROC curve (AUC) of 0.931 (95% CI 0.930- 0.931), which outperformed both baseline models using single-view detection (AUC, 0.892 (95% CI 0.891-0.892), <i>P</i> < .001) and ipsilateral matching (AUC, 0.915 (95% CI 0.914-0.915), <i>P</i> < .001). On the external test set, PriorNet achieved an AUC of 0.896 (95% CI 0.885-0.896), outperforming both baselines (AUCs, 0.846 (95% CI 0.846-0.847, <i>P</i> < .001) and 0.865 (95% CI 0.865-0.866) <i>P</i> < .001, respectively). In the high sensitivity range of 0.9 to 1.0, the partial AUC of PriorNet was significantly higher (<i>P</i> < .001) relative to both baselines. Conclusion PriorNet using temporal information further improved the breast cancer detection performance of an existing DBT cancer detection framework. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141976812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.230364
Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun

Purpose To assess the performance of a local open-source large language model (LLM) in various information extraction tasks from real-life emergency brain MRI reports. Materials and Methods All consecutive emergency brain MRI reports written in 2022 from a French quaternary center were retrospectively reviewed. Two radiologists identified MRI scans that were performed in the emergency department for headaches. Four radiologists scored the reports' conclusions as either normal or abnormal. Abnormalities were labeled as either headache-causing or incidental. Vicuna (LMSYS Org), an open-source LLM, performed the same tasks. Vicuna's performance metrics were evaluated using the radiologists' consensus as the reference standard. Results Among the 2398 reports during the study period, radiologists identified 595 that included headaches in the indication (median age of patients, 35 years [IQR, 26-51 years]; 68% [403 of 595] women). A positive finding was reported in 227 of 595 (38%) cases, 136 of which could explain the headache. The LLM had a sensitivity of 98.0% (95% CI: 96.5, 99.0) and specificity of 99.3% (95% CI: 98.8, 99.7) for detecting the presence of headache in the clinical context, a sensitivity of 99.4% (95% CI: 98.3, 99.9) and specificity of 98.6% (95% CI: 92.2, 100.0) for detecting the use of contrast medium injection, a sensitivity of 96.0% (95% CI: 92.5, 98.2) and specificity of 98.9% (95% CI: 97.2, 99.7) for study categorization as either normal or abnormal, and a sensitivity of 88.2% (95% CI: 81.6, 93.1) and specificity of 73% (95% CI: 62, 81) for causal inference between MRI findings and headache. Conclusion An open-source LLM was able to extract information from free-text radiology reports with excellent accuracy without requiring further training. Keywords: Large Language Model (LLM), Generative Pretrained Transformers (GPT), Open Source, Information Extraction, Report, Brain, MRI. Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Akinci D'Antonoli and Bluethgen in this issue.
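A minimal sketch of this kind of extraction pipeline is shown below: a fixed prompt asks the model one binary question per report conclusion, and the parsed answer is scored against the radiologists' consensus. Note that `llm_generate` is a hypothetical stand-in for whatever local inference call serves the Vicuna weights, and the prompt wording is invented for illustration; neither comes from the study.

```python
import json

# illustrative prompt; the study's actual prompts are not reproduced here
PROMPT = (
    "You are reading the conclusion of an emergency brain MRI report.\n"
    'Answer with JSON only: {{"abnormal": true}} or {{"abnormal": false}}.\n'
    "Conclusion: {conclusion}"
)

def classify_report(conclusion: str, llm_generate) -> bool:
    # `llm_generate` is a hypothetical callable: prompt string in, completion out
    raw = llm_generate(PROMPT.format(conclusion=conclusion))
    return bool(json.loads(raw)["abnormal"])

def sensitivity_specificity(pred, truth):
    """Score boolean predictions against the consensus reference standard."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))
```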

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 评估本地开源大语言模型(LLM)在实际急诊脑部核磁共振成像报告的各种信息提取任务中的表现。材料与方法 回顾性审查了法国一家四级中心 2022 年撰写的所有连续急诊脑部 MRI 报告。两名放射科医生确定了因头痛而进行的磁共振成像。四位放射科医生将报告结论分为正常或异常。异常被标记为导致头痛或偶发。开源 LLM Vicuna 也执行了同样的任务。以放射科医生的共识作为参考标准,对 Vicuna 的性能指标进行了评估。结果 在研究期间的 2398 份报告中,放射科医生发现有 595 份报告的适应症包括头痛(患者年龄中位数为 35 岁 [IQR,26-51],68%(403/595)为女性)。227/595(38%)例报告了阳性结果,其中 136 例可以解释头痛。在临床情况下,LLM 检测头痛存在的敏感性/特异性(95%CI)分别为 98% (583/595)(97-99)/99% (1791/1803)(99-100) ,注射造影剂的敏感性/特异性(95%CI)分别为 99% (514/517)(98-100)/99% (68/69)(92-100) 、97%(219/227)(93-99)/99%(364/368)(97-100)用于正常或异常研究分类,88%(120/136)(82-93)/73%(66/91)(62-81)用于 MRI 发现与头痛之间的因果推断。结论 开源 LLM 能够从自由文本放射学报告中提取信息,准确性极高,无需进一步培训。©RSNA,2024。
{"title":"Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.","authors":"Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun","doi":"10.1148/ryai.230364","DOIUrl":"10.1148/ryai.230364","url":null,"abstract":"<p><p>Purpose To assess the performance of a local open-source large language model (LLM) in various information extraction tasks from real-life emergency brain MRI reports. Materials and Methods All consecutive emergency brain MRI reports written in 2022 from a French quaternary center were retrospectively reviewed. Two radiologists identified MRI scans that were performed in the emergency department for headaches. Four radiologists scored the reports' conclusions as either normal or abnormal. Abnormalities were labeled as either headache-causing or incidental. Vicuna (LMSYS Org), an open-source LLM, performed the same tasks. Vicuna's performance metrics were evaluated using the radiologists' consensus as the reference standard. Results Among the 2398 reports during the study period, radiologists identified 595 that included headaches in the indication (median age of patients, 35 years [IQR, 26-51 years]; 68% [403 of 595] women). A positive finding was reported in 227 of 595 (38%) cases, 136 of which could explain the headache. The LLM had a sensitivity of 98.0% (95% CI: 96.5, 99.0) and specificity of 99.3% (95% CI: 98.8, 99.7) for detecting the presence of headache in the clinical context, a sensitivity of 99.4% (95% CI: 98.3, 99.9) and specificity of 98.6% (95% CI: 92.2, 100.0) for the use of contrast medium injection, a sensitivity of 96.0% (95% CI: 92.5, 98.2) and specificity of 98.9% (95% CI: 97.2, 99.7) for study categorization as either normal or abnormal, and a sensitivity of 88.2% (95% CI: 81.6, 93.1) and specificity of 73% (95% CI: 62, 81) for causal inference between MRI findings and headache. Conclusion An open-source LLM was able to extract information from free-text radiology reports with excellent accuracy without requiring further training. <b>Keywords:</b> Large Language Model (LLM), Generative Pretrained Transformers (GPT), Open Source, Information Extraction, Report, Brain, MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also the commentary by Akinci D'Antonoli and Bluethgen in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294959/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.230431
Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren

Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for more than 300 000 female patients who attended screening at three different sites in the United Kingdom from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were identified and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191 488 risk-negative examinations, training (n = 89 285), validation (n = 2106), and test (n = 39 351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Interpretability of the model was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities, or across age groups. Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, United Kingdom-specific dataset. Keywords: Deep Learning, Artificial Intelligence, Breast Cancer, Screening, Risk Prediction. Supplemental material is available for this article. ©RSNA, 2024.
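Saliency maps like those mentioned above are commonly computed as input gradients: the gradient of the predicted risk with respect to the input mammogram highlights the pixels that most influence the prediction. The sketch below assumes any trained PyTorch risk model mapping an image tensor to a scalar risk; the tiny stand-in model is a placeholder, not the study's tool.

```python
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """|d(risk)/d(pixel)|, reduced over channels, for one (1, C, H, W) image."""
    model.eval()
    x = image.clone().requires_grad_(True)
    model(x).squeeze().backward()                 # scalar risk -> gradients on x
    return x.grad.abs().amax(dim=1).squeeze(0)    # (H, W) saliency heatmap

# illustrative stand-in for a trained risk model
dummy = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, 1))
heat = saliency_map(dummy, torch.randn(1, 1, 64, 64))
```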

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 开发一种人工智能(AI)深度学习工具,该工具能够根据当前乳腺X光筛查的阴性结果预测未来的乳腺癌风险,并根据英国国民健康服务乳腺筛查项目的数据对模型进行评估。材料与方法 OPTIMAM 乳房 X 线照相术成像数据库包含从 2012 年起在英国三个不同地点参加筛查的超过 30 万名女性的筛查数据,包括乳房 X 线照相术和间期癌症信息。该数据库获取了 50-70 岁妇女的无癌症筛查数据,并根据原始检查后 3 年内癌症的发生情况将其分为风险阳性和风险阴性。排除了确诊癌症的检查和含有植入物的图像。在由此产生的 5264 例风险阳性和 191488 例风险阴性检查中,产生了用于模型开发和评估的训练数据集(n = 89285)、验证数据集(n = 2106)和测试数据集(n = 39351)。对人工智能模型进行了训练,以根据筛查乳房 X 线照片和患者年龄预测未来癌症发生率。使用接收者工作特征曲线下面积(AUC)对测试数据集的性能进行评估,并对不同亚群进行比较,以评估潜在的偏差。此外,还对模型的可解释性进行了探讨,包括使用突出图。结果 在保留测试集上,人工智能模型的总体 AUC 为 0.70(95% CI:0.69,0.72)。没有证据表明三个部位、不同种族或不同年龄组的患者在性能上存在差异 突出图和样本图像的可视化提供了与人工智能预测癌症风险相关的乳房 X 线摄影特征。结论 开发的人工智能工具在英国特定的多站点数据集上表现良好。©RSNA,2024。
{"title":"Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.","authors":"Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren","doi":"10.1148/ryai.230431","DOIUrl":"10.1148/ryai.230431","url":null,"abstract":"<p><p>Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for more than 300 000 female patients who attended screening at three different sites in the United Kingdom from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were performed and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191 488 risk-negative examinations, training (<i>n</i> = 89 285), validation (<i>n</i> = 2106), and test (<i>n</i> = 39 351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Interpretability of the model was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities, or across age groups. Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, United Kingdom-specific dataset. <b>Keywords:</b> Deep Learning, Artificial Intelligence, Breast Cancer, Screening, Risk Prediction <i>Supplemental material is available for this article.</i> ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141074674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.240300
Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn
{"title":"Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.","authors":"Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn","doi":"10.1148/ryai.240300","DOIUrl":"10.1148/ryai.240300","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11304031/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141162489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0