
Radiology-Artificial Intelligence: Latest Publications

Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor at Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230489
Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie

Purpose To develop and validate a deep learning (DL) method to detect and segment enhancing and nonenhancing cellular tumor on pre- and posttreatment MRI scans in patients with glioblastoma and to predict overall survival (OS) and progression-free survival (PFS). Materials and Methods This retrospective study included 1397 MRI scans in 1297 patients with glioblastoma, including an internal set of 243 MRI scans (January 2010 to June 2022) for model training and cross-validation and four external test cohorts. Cellular tumor maps were segmented by two radiologists on the basis of imaging, clinical history, and pathologic findings. Multimodal MRI data with perfusion and multishell diffusion imaging were inputted into a nnU-Net DL model to segment cellular tumor. Segmentation performance (Dice score) and performance in distinguishing recurrent tumor from posttreatment changes (area under the receiver operating characteristic curve [AUC]) were quantified. Model performance in predicting OS and PFS was assessed using Cox multivariable analysis. Results A cohort of 178 patients (mean age, 56 years ± 13 [SD]; 116 male, 62 female) with 243 MRI timepoints, as well as four external datasets with 55, 70, 610, and 419 MRI timepoints, respectively, were evaluated. The median Dice score was 0.79 (IQR, 0.53-0.89), and the AUC for detecting residual or recurrent tumor was 0.84 (95% CI: 0.79, 0.89). In the internal test set, estimated cellular tumor volume was significantly associated with OS (hazard ratio [HR] = 1.04 per milliliter; P < .001) and PFS (HR = 1.04 per milliliter; P < .001) after adjustment for age, sex, and gross total resection (GTR) status. In the external test sets, estimated cellular tumor volume was significantly associated with OS (HR = 1.01 per milliliter; P < .001) after adjustment for age, sex, and GTR status. Conclusion A DL model incorporating advanced imaging could accurately segment enhancing and nonenhancing cellular tumor, distinguish recurrent or residual tumor from posttreatment changes, and predict OS and PFS in patients with glioblastoma. Keywords: Segmentation, Glioblastoma, Multishell Diffusion MRI Supplemental material is available for this article. © RSNA, 2024.
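The two headline quantities in this abstract, the Dice overlap used to score the segmentations and the per-milliliter Cox hazard ratios for OS and PFS, can be illustrated with a short sketch. The example below runs on simulated data with assumed column names; it is not the authors' code.

```python
# Minimal sketch: Dice score for binary masks and a Cox multivariable model with
# tumor volume (mL) adjusted for age, sex, and GTR status. All data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

toy_pred = np.zeros((4, 4), bool); toy_pred[:2] = True
toy_truth = np.zeros((4, 4), bool); toy_truth[1:3] = True
print("Dice:", dice_score(toy_pred, toy_truth))  # 0.5 for half-overlapping masks

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "tumor_volume_ml": rng.gamma(2.0, 10.0, n),
    "age": rng.normal(56, 13, n),
    "sex": rng.integers(0, 2, n),
    "gtr": rng.integers(0, 2, n),
})
# Simulated follow-up in which larger tumors fail earlier (purely illustrative).
df["os_months"] = rng.exponential(36.0 / (1.0 + 0.02 * df["tumor_volume_ml"]))
df["death_event"] = 1

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death_event")
# exp(coef) for tumor_volume_ml is the hazard ratio per additional milliliter,
# the same form as the reported HR = 1.04 per mL for overall survival.
print(cph.summary[["exp(coef)", "p"]])
```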

Citations: 0
Fluid Intelligence: AI's Role in Accurate Measurement of Ascites.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240377
Alex M Aisen, Pedro S Rodrigues
Citations: 0
Artificial Intelligence Outcome Prediction in Neonates with Encephalopathy (AI-OPiNE).
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240076
Christopher O Lew, Evan Calabrese, Joshua V Chen, Felicia Tang, Gunvant Chaudhari, Amanda Lee, John Faro, Sandra Juul, Amit Mathur, Robert C McKinstry, Jessica L Wisnowski, Andreas Rauschecker, Yvonne W Wu, Yi Li

Purpose To develop a deep learning algorithm to predict 2-year neurodevelopmental outcomes in neonates with hypoxic-ischemic encephalopathy using MRI and basic clinical data. Materials and Methods In this study, MRI data of term neonates with encephalopathy in the High-dose Erythropoietin for Asphyxia and Encephalopathy (HEAL) trial (ClinicalTrials.gov: NCT02811263), who were enrolled from 17 institutions between January 25, 2017, and October 9, 2019, were retrospectively analyzed. The harmonized MRI protocol included T1-weighted, T2-weighted, and diffusion tensor imaging. Deep learning classifiers were trained to predict the primary outcome of the HEAL trial (death or any neurodevelopmental impairment at 2 years) using multisequence MRI and basic clinical variables, including sex and gestational age at birth. Model performance was evaluated on test sets comprising 10% of cases from 15 institutions (in-distribution test set, n = 41) and 10% of cases from two institutions (out-of-distribution test set, n = 41). Model performance in predicting additional secondary outcomes, including death alone, was also assessed. Results For the 414 neonates (mean gestational age, 39 weeks ± 1.4 [SD]; 232 male, 182 female), in the study cohort, 198 (48%) died or had any neurodevelopmental impairment at 2 years. The deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.74 (95% CI: 0.60, 0.86) and 63% accuracy in the in-distribution test set and an AUC of 0.77 (95% CI: 0.63, 0.90) and 78% accuracy in the out-of-distribution test set. Performance was similar or better for predicting secondary outcomes. Conclusion Deep learning analysis of neonatal brain MRI yielded high performance for predicting 2-year neurodevelopmental outcomes. Keywords: Convolutional Neural Network (CNN), Prognosis, Pediatrics, Brain, Brain Stem Clinical trial registration no. NCT02811263 Supplemental material is available for this article. © RSNA, 2024 See also commentary by Rafful and Reis Teixeira in this issue.
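The evaluation design described above, with separate in-distribution and out-of-distribution test sets each scored by AUC and accuracy, follows a standard pattern. A minimal sketch on simulated predictions (not the HEAL/AI-OPiNE code) is shown below.

```python
# Minimal sketch: AUC and accuracy for a binary outcome classifier on two held-out
# test sets, mirroring the in-distribution / out-of-distribution split (n = 41 each).
# Labels and predicted probabilities are simulated stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

def evaluate(y_true, y_prob, threshold=0.5):
    auc = roc_auc_score(y_true, y_prob)
    acc = accuracy_score(y_true, (y_prob >= threshold).astype(int))
    return auc, acc

rng = np.random.default_rng(0)
test_sets = {
    "in-distribution": (rng.integers(0, 2, 41), rng.random(41)),
    "out-of-distribution": (rng.integers(0, 2, 41), rng.random(41)),
}
for name, (y_true, y_prob) in test_sets.items():
    auc, acc = evaluate(y_true, y_prob)
    print(f"{name}: AUC = {auc:.2f}, accuracy = {acc:.0%}")
```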

Citations: 0
Presurgical Upgrade Prediction of DCIS to Invasive Ductal Carcinoma Using Time-dependent Deep Learning Models with DCE MRI.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.230348
John D Mayfield, Dana Ataya, Mahmoud Abdalah, Olya Stringfield, Marilyn M Bui, Natarajan Raghunand, Bethany Niell, Issam El Naqa

Purpose To determine whether time-dependent deep learning models can outperform single time point models in predicting preoperative upgrade of ductal carcinoma in situ (DCIS) to invasive malignancy at dynamic contrast-enhanced (DCE) breast MRI without a lesion segmentation prerequisite. Materials and Methods In this exploratory study, 154 cases of biopsy-proven DCIS (25 upgraded at surgery and 129 not upgraded) were selected consecutively from a retrospective cohort of preoperative DCE MRI in women with a mean age of 59 years at time of diagnosis from 2012 to 2022. Binary classification was implemented with convolutional neural network (CNN)-long short-term memory (LSTM) architectures benchmarked against traditional CNNs without manual segmentation of the lesions. Combinatorial performance analysis of ResNet50 versus VGG16-based models was performed with each contrast phase. Binary classification area under the receiver operating characteristic curve (AUC) was reported. Results VGG16-based models consistently provided better holdout test AUCs than did ResNet50 in CNN and CNN-LSTM studies (multiphase test AUC, 0.67 vs 0.59, respectively, for CNN models [P = .04] and 0.73 vs 0.62 for CNN-LSTM models [P = .008]). The time-dependent model (CNN-LSTM) provided a better multiphase test AUC over single time point (CNN) models (0.73 vs 0.67; P = .04). Conclusion Compared with single time point architectures, sequential deep learning algorithms using preoperative DCE MRI improved prediction of DCIS lesions upgraded to invasive malignancy without the need for lesion segmentation. Keywords: MRI, Dynamic Contrast-enhanced, Breast, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2024.
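The comparison at the heart of this study, a time-dependent model that pools per-phase CNN features with an LSTM versus a single-time-point CNN, can be sketched in a few lines. The PyTorch example below uses a shared VGG16 feature extractor (the backbone that performed best here) followed by an LSTM over the DCE contrast phases; shapes, layer sizes, and the four-phase input are assumptions, not the authors' implementation.

```python
# Minimal sketch of a CNN-LSTM for dynamic contrast-enhanced (DCE) MRI:
# a shared VGG16 backbone encodes each contrast phase, and an LSTM pools the
# per-phase features into a single upgrade-vs-no-upgrade logit.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CNNLSTM(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        backbone = vgg16(weights=None)        # pretrained weights would be used in practice
        self.cnn = backbone.features          # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)   # (B*T, 512, 1, 1) per phase
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # binary output: upgraded vs not upgraded

    def forward(self, x):                     # x: (B, T, 3, H, W), T = contrast phases
        b, t = x.shape[:2]
        feats = self.pool(self.cnn(x.flatten(0, 1))).flatten(1)  # (B*T, 512)
        out, _ = self.lstm(feats.view(b, t, -1))                 # (B, T, hidden)
        return self.head(out[:, -1])          # logit from the last phase's hidden state

logits = CNNLSTM()(torch.randn(2, 4, 3, 224, 224))  # toy batch: 2 cases, 4 phases
print(logits.shape)  # torch.Size([2, 1])
```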

Citations: 0
Advancing Pediatric Neuro-Oncology: Multi-institutional nnU-Net Segmentation of Medulloblastoma.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-01 DOI: 10.1148/ryai.240517
Jeffrey D Rudie, Maria Correia de Verdier
Citations: 0
Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.1148/ryai.230364
Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun

Purpose To assess the performance of a local open-source large language model (LLM) in various information extraction tasks from real-life emergency brain MRI reports. Materials and Methods All consecutive emergency brain MRI reports written in 2022 from a French quaternary center were retrospectively reviewed. Two radiologists identified MRI scans that were performed in the emergency department for headaches. Four radiologists scored the reports' conclusions as either normal or abnormal. Abnormalities were labeled as either headache-causing or incidental. Vicuna (LMSYS Org), an open-source LLM, performed the same tasks. Vicuna's performance metrics were evaluated using the radiologists' consensus as the reference standard. Results Among the 2398 reports during the study period, radiologists identified 595 that included headaches in the indication (median age of patients, 35 years [IQR, 26-51 years]; 68% [403 of 595] women). A positive finding was reported in 227 of 595 (38%) cases, 136 of which could explain the headache. The LLM had a sensitivity of 98.0% (95% CI: 96.5, 99.0) and specificity of 99.3% (95% CI: 98.8, 99.7) for detecting the presence of headache in the clinical context, a sensitivity of 99.4% (95% CI: 98.3, 99.9) and specificity of 98.6% (95% CI: 92.2, 100.0) for the use of contrast medium injection, a sensitivity of 96.0% (95% CI: 92.5, 98.2) and specificity of 98.9% (95% CI: 97.2, 99.7) for study categorization as either normal or abnormal, and a sensitivity of 88.2% (95% CI: 81.6, 93.1) and specificity of 73% (95% CI: 62, 81) for causal inference between MRI findings and headache. Conclusion An open-source LLM was able to extract information from free-text radiology reports with excellent accuracy without requiring further training. Keywords: Large Language Model (LLM), Generative Pretrained Transformers (GPT), Open Source, Information Extraction, Report, Brain, MRI Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Akinci D'Antonoli and Bluethgen in this issue.
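The reported sensitivities and specificities come from comparing the LLM's binary answers with the radiologists' consensus used as the reference standard. A minimal sketch with made-up per-report labels (not the study pipeline) is shown below.

```python
# Minimal sketch: sensitivity and specificity of LLM-extracted binary answers
# (e.g., "is headache mentioned in the indication?") against radiologist consensus.
# The label arrays are invented for illustration.
import numpy as np

def sensitivity_specificity(consensus: np.ndarray, llm: np.ndarray):
    tp = np.sum((llm == 1) & (consensus == 1))
    tn = np.sum((llm == 0) & (consensus == 0))
    fp = np.sum((llm == 1) & (consensus == 0))
    fn = np.sum((llm == 0) & (consensus == 1))
    return tp / (tp + fn), tn / (tn + fp)

consensus = np.array([1, 1, 1, 0, 0, 0, 0, 1])  # 1 = headache present per radiologists
llm       = np.array([1, 1, 0, 0, 0, 0, 1, 1])  # 1 = headache present per the LLM
sens, spec = sensitivity_specificity(consensus, llm)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```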

Citations: 0
Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.1148/ryai.230431
Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren

Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for more than 300 000 female patients who attended screening at three different sites in the United Kingdom from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were performed and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191 488 risk-negative examinations, training (n = 89 285), validation (n = 2106), and test (n = 39 351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Interpretability of the model was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities, or across age groups. Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, United Kingdom-specific dataset. Keywords: Deep Learning, Artificial Intelligence, Breast Cancer, Screening, Risk Prediction Supplemental material is available for this article. ©RSNA, 2024.
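The labeling scheme used here, in which a cancer-free screening examination counts as risk-positive if cancer occurred within 3 years, and the subgroup AUC checks can be sketched as follows. Column names and values are hypothetical, not the OPTIMAM schema.

```python
# Minimal sketch: build risk-positive / risk-negative labels from time to cancer and
# compare AUC of an AI risk score across age bands. All columns are simulated.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
exams = pd.DataFrame({
    "age": rng.integers(50, 71, n),
    "years_to_cancer": np.where(rng.random(n) < 0.05, rng.uniform(0, 6, n), np.nan),
    "ai_risk_score": rng.random(n),
})
# Risk-positive: cancer occurred within 3 years of the (cancer-free) screening exam.
exams["risk_positive"] = (exams["years_to_cancer"] <= 3).astype(int)
exams["age_band"] = pd.cut(exams["age"], bins=[49, 55, 60, 65, 70])

print("overall AUC:", roc_auc_score(exams["risk_positive"], exams["ai_risk_score"]))
for band, sub in exams.groupby("age_band", observed=True):
    if sub["risk_positive"].nunique() == 2:   # AUC needs both classes in the subgroup
        print(band, roc_auc_score(sub["risk_positive"], sub["ai_risk_score"]))
```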

Citations: 0
Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.1148/ryai.240300
Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn
Citations: 0
Two-Stage Training Framework Using Multicontrast MRI Radiomics for IDH Mutation Status Prediction in Glioma.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.1148/ryai.230218
Nghi C D Truong, Chandan Ganesh Bangalore Yogananda, Benjamin C Wagner, James M Holcomb, Divya Reddy, Niloufar Saadat, Kimmo J Hatanpaa, Toral R Patel, Baowei Fei, Matthew D Lee, Rajan Jain, Richard J Bruce, Marco C Pinho, Ananth J Madhuranthakam, Joseph A Maldjian

Purpose To develop a radiomics framework for preoperative MRI-based prediction of isocitrate dehydrogenase (IDH) mutation status, a crucial glioma prognostic indicator. Materials and Methods Radiomics features (shape, first-order statistics, and texture) were extracted from the whole tumor or the combination of nonenhancing, necrosis, and edema regions. Segmentation masks were obtained via the federated tumor segmentation tool or the original data source. Boruta, a wrapper-based feature selection algorithm, identified relevant features. Addressing the imbalance between mutated and wild-type cases, multiple prediction models were trained on balanced data subsets using random forest or XGBoost and assembled to build the final classifier. The framework was evaluated using retrospective MRI scans from three public datasets (The Cancer Imaging Archive [TCIA, 227 patients], the University of California San Francisco Preoperative Diffuse Glioma MRI dataset [UCSF, 495 patients], and the Erasmus Glioma Database [EGD, 456 patients]) and internal datasets collected from the University of Texas Southwestern Medical Center (UTSW, 356 patients), New York University (NYU, 136 patients), and University of Wisconsin-Madison (UWM, 174 patients). TCIA and UTSW served as separate training sets, while the remaining data constituted the test set (1617 or 1488 testing cases, respectively). Results The best performing models trained on the TCIA dataset achieved area under the receiver operating characteristic curve (AUC) values of 0.89 for UTSW, 0.86 for NYU, 0.93 for UWM, 0.94 for UCSF, and 0.88 for EGD test sets. The best performing models trained on the UTSW dataset achieved slightly higher AUCs: 0.92 for TCIA, 0.88 for NYU, 0.96 for UWM, 0.93 for UCSF, and 0.90 for EGD. Conclusion This MRI radiomics-based framework shows promise for accurate preoperative prediction of IDH mutation status in patients with glioma. Keywords: Glioma, Isocitrate Dehydrogenase Mutation, IDH Mutation, Radiomics, MRI Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Moassefi and Erickson in this issue.
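The imbalance-handling strategy described above, several models trained on balanced data subsets and assembled into one classifier, can be illustrated briefly. In the sketch below, synthetic features stand in for the radiomics inputs, Boruta feature selection is omitted, and a random forest is trained on each balanced subset; this is an assumption-laden sketch, not the authors' pipeline.

```python
# Minimal sketch: ensemble of random forests, each trained on a balanced subset
# (all minority-class cases plus an equal-sized draw of majority-class cases),
# with predicted probabilities averaged at test time. Features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 50))              # stand-in for radiomics features
y = (rng.random(600) < 0.15).astype(int)    # imbalanced labels (e.g., IDH-mutant vs wild type)

minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
models = []
for seed in range(10):                      # one model per balanced subset
    drawn = rng.choice(majority, size=minority.size, replace=False)
    idx = np.concatenate([minority, drawn])
    models.append(RandomForestClassifier(n_estimators=200, random_state=seed).fit(X[idx], y[idx]))

X_test = rng.normal(size=(5, 50))
prob = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
print(prob)                                 # ensemble probability of the minority class
```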

Citations: 0
Evaluating Sex-specific Differences in Abdominal Fat Volume and Proton Density Fat Fraction at MRI Using Automated nnU-Net-based Segmentation.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-07-01 DOI: 10.1148/ryai.230471
Arun Somasundaram, Mingming Wu, Anna Reik, Selina Rupp, Jessie Han, Stella Naebauer, Daniela Junker, Lisa Patzelt, Meike Wiechert, Yu Zhao, Daniel Rueckert, Hans Hauner, Christina Holzapfel, Dimitrios C Karampinos

Sex-specific abdominal organ volume and proton density fat fraction (PDFF) in people with obesity during a weight loss intervention was assessed with automated multiorgan segmentation of quantitative water-fat MRI. An nnU-Net architecture was employed for automatic segmentation of abdominal organs, including visceral and subcutaneous adipose tissue, liver, and psoas and erector spinae muscle, based on quantitative chemical shift-encoded MRI and using ground truth labels generated from participants of the Lifestyle Intervention (LION) study. Each organ's volume and fat content were examined in 127 participants (73 female and 54 male participants; body mass index, 30-39.9 kg/m2) and in 81 (54 female and 32 male participants) of these participants after an 8-week formula-based low-calorie diet. Dice scores ranging from 0.91 to 0.97 were achieved for the automatic segmentation. PDFF was found to be lower in visceral adipose tissue compared with subcutaneous adipose tissue in both male and female participants. Before intervention, female participants exhibited higher PDFF in subcutaneous adipose tissue (90.6% vs 89.7%; P < .001) and lower PDFF in liver (8.6% vs 13.3%; P < .001) and visceral adipose tissue (76.4% vs 81.3%; P < .001) compared with male participants. This relation persisted after intervention. As a response to caloric restriction, male participants lost significantly more visceral adipose tissue volume (1.76 L vs 0.91 L; P < .001) and showed a higher decrease in subcutaneous adipose tissue PDFF (2.7% vs 1.5%; P < .001) than female participants. Automated body composition analysis on quantitative water-fat MRI data provides new insights for understanding sex-specific metabolic response to caloric restriction and weight loss in people with obesity. Keywords: Obesity, Chemical Shift-encoded MRI, Abdominal Fat Volume, Proton Density Fat Fraction, nnU-Net ClinicalTrials.gov registration no. NCT04023942 Supplemental material is available for this article. Published under a CC BY 4.0 license.
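The per-organ measurements reported here rest on a simple voxelwise formula, PDFF = fat / (fat + water), averaged inside each organ's segmentation mask. A minimal sketch on synthetic volumes with an assumed voxel size (not the study code) is shown below.

```python
# Minimal sketch: voxelwise proton density fat fraction (PDFF) from chemical
# shift-encoded fat/water images, plus organ volume and mean PDFF within a mask.
# Volumes, mask, and voxel size are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
fat = rng.random((64, 64, 32)) * 100        # fat signal (arbitrary units)
water = rng.random((64, 64, 32)) * 100      # water signal (arbitrary units)
mask = rng.random((64, 64, 32)) > 0.9       # hypothetical organ segmentation

pdff = 100.0 * fat / (fat + water + 1e-8)   # PDFF in percent, per voxel
voxel_ml = (1.5 * 1.5 * 3.0) / 1000.0       # assumed 1.5 x 1.5 x 3.0 mm voxels
print(f"organ volume: {mask.sum() * voxel_ml:.1f} mL, mean PDFF: {pdff[mask].mean():.1f}%")
```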

Citations: 0