
Radiology-Artificial Intelligence: Latest Publications

Random Convolutions for Domain Generalization of Deep Learning-based Medical Image Segmentation Models.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1148/ryai.240502
Daniel Scholz, Ayhan Can Erdur, Jan C Peeken, Aswathi Varma, Robert Graf, Jan S Kirschke, Daniel Rueckert, Benedikt Wiestler

Purpose To evaluate random convolutions as an augmentation strategy for improving domain generalization of deep learning-based segmentation models in medical imaging. Materials and Methods In this retrospective study, a random convolution-based augmentation strategy was applied to abdominal organ segmentation (AbdomenCT-1k dataset: 361 CT images; Abdominal Multi Organ Segmentation [AMOS] dataset: 298 CT and 59 MRI scans) and brain tissue segmentation (Information eXtraction from Images [IXI] dataset: 504 T1-weighted images from Guy's and Hammersmith Hospitals, 146 paired T1-weighted and T2-weighted images from the Institute of Psychiatry). Performance was compared with baseline and state-of-the-art segmentation models (TotalSegmentator and deepAtropos). Random convolution configurations were analyzed for effects on in- and out-of-domain performance. Results The random convolution-enhanced U-Net achieved in-domain Dice scores comparable to state-of-the-art baselines (CT: 0.93 vs TotalSegmentator: 0.95; T1-weighted imaging: 0.83 vs deepAtropos: 0.79). Out-of-domain Dice scores were significantly higher (MRI: 0.93, T2-weighted imaging: 0.52) compared with baselines (TotalSegmentator in MRI: 0.85, deepAtropos in T2-weighted imaging: 0.33; false discovery rate-adjusted P < .001). Augmentation probability and configuration influenced the trade-off between in- and out-of-domain performance. Conclusion Random convolutions yielded more robust segmentation models that generalized better to unseen domains than models trained without them, and the approach is compatible with diverse segmentation architectures. Keywords: MR-Imaging, CT, Supervised Learning, Segmentation, Abdomen/GI, Experimental Investigations Supplemental material is available for this article. © RSNA, 2025 See also commentary by Mathai in this issue.
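
The augmentation at the heart of this paper is simple to prototype. Below is a minimal sketch of a RandConv-style layer in PyTorch; the kernel sizes, application probability, and renormalization here are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn.functional as F

def random_conv_augment(image: torch.Tensor, p: float = 0.5,
                        kernel_sizes=(1, 3, 5, 7)) -> torch.Tensor:
    """Apply a freshly sampled random convolution to a batch with
    probability p. image: (N, C, H, W). All settings are illustrative."""
    if torch.rand(()).item() > p:
        return image  # leave this batch unaugmented
    k = kernel_sizes[torch.randint(len(kernel_sizes), ()).item()]
    c = image.shape[1]
    # New random weights on every call: each draw simulates a new "domain"
    # with different local texture statistics but preserved shapes.
    weight = torch.randn(c, c, k, k, device=image.device) / (c * k * k) ** 0.5
    out = F.conv2d(image, weight, padding=k // 2)
    # Restandardize so downstream normalization assumptions still hold.
    return (out - out.mean()) / (out.std() + 1e-6)
```

Training then alternates at random between original and randomly convolved inputs, consistent with the abstract's observation that the augmentation probability governs the trade-off between in- and out-of-domain Dice.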

Citations: 0
From Futile to Feasible: Improving Assessment of Stenosis in Heavily Calcified Coronary Arteries.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1148/ryai.251077
Ahmed Maiter, Samer Alabed
{"title":"From Futile to Feasible: Improving Assessment of Stenosis in Heavily Calcified Coronary Arteries.","authors":"Ahmed Maiter, Samer Alabed","doi":"10.1148/ryai.251077","DOIUrl":"https://doi.org/10.1148/ryai.251077","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"8 1","pages":"e251077"},"PeriodicalIF":13.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Breast Cancers Detected and Missed by AI-CAD: Results from the AI-STREAM Trial.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1148/ryai.250281
Yun-Woo Chang, Jung Kyu Ryu, Jin Kyung An, Nami Choi, Young Mi Park, Kyung Hee Ko

Purpose To evaluate the characteristics of breast cancers detected and missed by artificial intelligence-based computer-assisted diagnosis (AI-CAD) during screening mammography. Materials and Methods This retrospective secondary analysis was conducted using data from the Artificial Intelligence for Breast Cancer Screening in Mammography trial (ClinicalTrials.gov: NCT05024591), a prospective, multicenter cohort study performed from 2021 to 2022. AI-CAD results were categorized into nine subgroups based on abnormality scores (in 10% increments). Positive predictive values of recall (PPV1s) were calculated for each subgroup and by breast density, and AI-CAD scores were compared with mammographic and pathologic features. Results A total of 24 543 women (mean age ± SD, 59.8 years ± 11.2), including two with bilateral cancer, were included; 148 cancers were confirmed by pathologic evaluation after 1 year of follow-up. AI-CAD results were negative in 23 010 cases (93.8%) and positive in 1535 (6.2%). The overall PPV1 was 8.7% (133 of 1535), with a sensitivity of 89.9% and specificity of 94.3%; PPV1 increased with higher abnormality scores but remained below 3% in groups 1 and 3 for dense breasts. AI-CAD detected 3.4% (five of 148) of cancers missed by radiologists but missed 8.1% (12 of 148) that were detected at radiologist recall. Abnormality scores were lower in patients presenting with mammographic asymmetry (P = .001) and luminal A subtype (P = .032). Conclusion AI-CAD shows potential to improve breast cancer detection in screening programs and to support radiologists in mammogram interpretation. Understanding the imaging and pathologic features of cancers detected or missed by AI-CAD may enhance its effective clinical application. Keywords: Breast Cancer, Mammography, AI CAD Clinical trial registration no. NCT05024591 © RSNA, 2025 See also commentary by Do and Bahl in this issue.
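
The screening metrics above are simple functions of confusion counts; a short NumPy sketch (illustrative names, not trial code) shows how PPV1, sensitivity, specificity, and the nine 10%-increment subgroups could be computed.

```python
import numpy as np

def screening_metrics(scores, cancer, threshold=0.10):
    """PPV1, sensitivity, and specificity for an AI-CAD rule flagging
    exams with abnormality score >= threshold (10% in this study), plus
    a subgroup index in 10% increments (0 = below threshold, 1-9 above)."""
    scores = np.asarray(scores, dtype=float)
    cancer = np.asarray(cancer, dtype=bool)  # 1-year pathologic outcome
    positive = scores >= threshold
    ppv1 = cancer[positive].mean() if positive.any() else float("nan")
    sensitivity = positive[cancer].mean()
    specificity = (~positive)[~cancer].mean()
    subgroup = np.digitize(scores, np.arange(0.10, 1.0, 0.10))
    return ppv1, sensitivity, specificity, subgroup
```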

Citations: 0
Randomness: Can It Serve as a Bridge to Domain Generalization?
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1148/ryai.251014
Tejas Sudharshan Mathai
{"title":"Randomness: Can It Serve as a Bridge to Domain Generalization?","authors":"Tejas Sudharshan Mathai","doi":"10.1148/ryai.251014","DOIUrl":"https://doi.org/10.1148/ryai.251014","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"8 1","pages":"e251014"},"PeriodicalIF":13.2,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145967197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Brain Extraction Tool for Nonenhanced CT and CT Angiography: CTA-BET.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1148/ryai.240847
Mustafa Ahmed Mahmutoglu, Aditya Rastogi, Yeong Chul Yun, Sanya Middha, Julius Kernbach, Martha Foltyn-Dumitru, Gianluca Brugnara, Philipp Vollmuth, Alexander Radbruch, Martin Bendszus, Marianne Schell

Purpose To develop and evaluate a deep learning-based brain extraction model, CTA-BET, capable of providing accurate brain segmentation for CT angiography (CTA) and non-contrast-enhanced CT (NCCT) images. Materials and Methods In this retrospective study, CTA-BET was trained using CTA data from multi-institutional cohorts (n = 100 patients) and validated on an external CTA dataset (n = 50 patients). NCCT validation was performed using the publicly available CQ500 dataset (n = 132 patients). The model's performance was compared with five benchmark noncommercial brain extraction tools. Dice score, Hausdorff distance, and z score-normalized histograms were used to evaluate segmentation performance. Results The CTA-BET model outperformed all benchmark models, achieving a mean Dice score of 0.99 (95% CI: 0.99, 0.99) on CTA data (P < .001 for all comparisons) and 0.98 (95% CI: 0.98, 0.99) on NCCT images (P < .001 for all comparisons). In terms of Hausdorff distance, CTA-BET demonstrated higher performance compared with other benchmark tools on CTA images (P < .001 for all comparisons). Conclusion CTA-BET outperformed benchmark brain extraction tools on both CTA and NCCT images, providing a robust and accurate solution that could enhance automated imaging analysis in clinical and research settings. Keywords: CT-Angiography, Head/Neck, Brain/Brain Stem, Computer Applications-3D, Comparative Studies, Experimental Investigations, Technology Assessment, Segmentation, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA 2025.
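
The two segmentation metrics used here are standard and easy to reproduce. A minimal sketch assuming NumPy and SciPy follows; distances come out in voxel units unless scaled by the voxel spacing.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff_distance(pred: np.ndarray, ref: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the voxel coordinates of two
    binary masks; multiply by spacing for millimeters."""
    p = np.argwhere(pred)
    r = np.argwhere(ref)
    return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
```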

Citations: 0
Rethinking Privacy in Medical Imaging AI: From Metadata and Pixel-Level Identification Risks to Federated Learning and Synthetic Data Challenges.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1148/ryai.250273
Konstantina Giouroukou, Kostas Marias, Manolis Tsiknakis, Michail E Klontzas

Metadata, which refers to nonimage information such as patient identifiers, acquisition parameters, and institutional details, have long been the primary focus of de-identification efforts when constructing datasets for artificial intelligence applications in medical imaging. However, it is now evident that information intrinsic to the image itself, at the pixel level (eg, intensity values), can also be exploited by deep learning models, potentially revealing sensitive patient data and posing privacy risks. This report discusses both metadata and sources of identifiable information in medical imaging studies, highlighting the potential risks of overlooking their presence. Privacy-preserving approaches such as federated learning and synthetic data generation are also reviewed, with emphasis on their limitations-particularly vulnerabilities to model inversion and inference attacks-that must be considered when developing and deploying artificial intelligence in medical imaging. Keywords: Privacy, Metadata, Synthetic, Federated Learning, Anonymization De-identification ©RSNA, 2025.
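
The report's central point is that scrubbing DICOM metadata is necessary but not sufficient, because the pixel data itself can be identifying. For the metadata side, a minimal de-identification sketch assuming pydicom is shown below; the tag selection is illustrative only, and a production pipeline should follow the DICOM PS3.15 confidentiality profiles.

```python
import pydicom

# Header fields commonly blanked in basic de-identification; a real
# pipeline should implement the DICOM PS3.15 profiles instead.
IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Blank identifying header fields and drop private tags.

    Note: this does nothing about pixel-level identifiers (burned-in
    text, facial surfaces), the residual risk the report highlights."""
    ds = pydicom.dcmread(src_path)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            setattr(ds, tag, "")
    ds.remove_private_tags()
    ds.save_as(dst_path)
```

Even after such header scrubbing, reconstruction-prone pixel content (for example, facial features on head imaging) can remain identifying, which is exactly why the report argues for looking beyond metadata.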

Citations: 0
Body Charts from CT Segmentations across the Adult Lifespan: Large-scale Cross-sectional and Longitudinal Analyses.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-24 DOI: 10.1148/ryai.250506
Christian Wachinger, Bernhard Renger, Christopher Späth, Marcus R Makowski

Purpose To model the distribution of CT-derived whole-body anatomic volumes across adulthood and establish comprehensive cross-sectional and longitudinal reference charts, addressing the current lack of non-brain CT-based whole-body standards. Materials and Methods Retrospective CT scans acquired from March 2017 to April 2025 (189,710 scans, 106,563 patients) from the institutional PACS and two external datasets (19,393 and 1,158 patients, respectively) were automatically segmented into 104 structures (totaling 7.8 million volumes). An automated quality control pipeline, incorporating a novel outlier removal strategy based on strong correlation between organ sizes, ensured data reliability. Cross-sectional normative models were constructed using Generalized Additive Models for Location, Scale, and Shape (GAMLSS) to capture non-linear age effects through fractional polynomial functions. A Generalized Additive Mixed Model (GAMM) was employed for longitudinal analyses to assess within-subject changes over follow-up visits. Results All anatomic structures followed complex, non-linear age trajectories, with marked sex differences and distinct CT contrast effects on vascular structures. Bootstrap resampling confirmed the stability and precision of these volume trajectories in both central tendency and variability. An exemplary cardiomegaly case-control analysis showed significantly increased centile scores (P < .001) for heart volume. The longitudinal analysis further revealed significant age-sex interactions influencing within-subject trajectories. Conclusion Cross-sectional and longitudinal reference models were developed from CT-derived anatomic volumes that map the trajectories of body structural change across adulthood. These body charts facilitate robust quantification of individual deviations via centile scores. ©RSNA, 2025.
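
GAMLSS fitting itself is usually done in R. As a hedged illustration of the end product — reading a centile score for one patient off an age-matched reference — here is a crude empirical stand-in in Python; the age window, toy data, and function name are placeholders, not the authors' model.

```python
import numpy as np

def centile_score(volume: float, age: float, ref_ages: np.ndarray,
                  ref_volumes: np.ndarray, window: float = 2.5) -> float:
    """Empirical centile of one patient's organ volume among reference
    subjects within +/- `window` years of the same age. A simplistic
    substitute for a fitted GAMLSS centile curve, for illustration only."""
    peers = ref_volumes[np.abs(ref_ages - age) <= window]
    if peers.size == 0:
        return float("nan")
    return 100.0 * (peers < volume).mean()

# Example: a 70-year-old's heart volume against a synthetic reference.
rng = np.random.default_rng(0)
ages = rng.uniform(24, 93, 5000)
vols = 600 + 2.0 * ages + rng.normal(0, 60, 5000)  # toy age trend
print(centile_score(780.0, 70.0, ages, vols))
```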

Citations: 0
Radiologist and AI Concordance in Screening Mammography and Association with Future Breast Cancer Risk.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240804
Eun Young Kim, Eun Kyung Park, Mi-Ri Kwon, Minjeong Kim, Junhee Park, Jeonggyu Kang, Yoosun Cho, Sanghyup Lee, Minji Song, Ki Hwan Kim, Tae Soo Kim, Hyeonsoo Lee, Ria Kwon, Ga-Young Lim, JunHyeok Choi, Soo-Youn Ham, Shin Ho Kook, Yoosoo Chang, Seungho Ryu

Purpose To evaluate the association between radiologist and stand-alone AI concordance in screening mammography interpretations and future breast cancer risk and to assess how breast density influences this relationship. Materials and Methods This retrospective study included Korean women who underwent digital screening mammography between January 2009 and December 2018. Incidental breast cancers, defined as those diagnosed within 1 year of screening, were excluded using the National Cancer Registry data. A commercial AI system retrospectively analyzed mammograms, with a 10% malignancy probability threshold. Patients were categorized into four groups based on AI-radiologist concordance: Concordant-Negative, Radiologist-Positive/AI-Negative, Radiologist-Negative/AI-Positive, and Concordant-Positive. Breast density was categorized using Breast Imaging Reporting and Data System classification. Cox proportional hazards models estimated adjusted hazard ratios (HRs) and 95% CIs. Results Over a median 7.3-year follow-up, 1011 breast cancers occurred among 82 899 women (mean age, 43.4 years ± 8.6 [SD]). Breast cancer incidence was the highest in the Concordant-Positive group (5-year cumulative incidence, 37.4 per 1000 person-years) and the lowest in the Concordant-Negative group (5.9 per 1000 person-years). Compared with the Concordant-Negative group, adjusted HRs for incident breast cancer were 2.30 (P < .001) for the Radiologist-Negative/AI-Positive, 1.15 (P = .17) for the Radiologist-Positive/AI-Negative, and 4.51 (P < .001) for the Concordant-Positive groups. Risk increase in the Concordant-Positive group was consistent across breast densities; in dense breasts, elevated risk occurred only with positive AI. Conclusion Positive screening mammography findings identified by both radiologists and stand-alone AI (Concordant-Positive group) were associated with the highest 5-year breast cancer incidence of 3.74%, exceeding the 3% threshold for considering chemoprevention or supplemental imaging. Keywords: Mammography, Breast Neoplasms, Artificial Intelligence, Screening, Concordance Supplemental material is available for this article. © RSNA, 2025.
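
The group definitions and hazard ratios above map onto a standard survival analysis. A hedged sketch with synthetic data, assuming the lifelines package, is shown below; the column names, recall rates, and event rates are invented for illustration and bear no relation to the study's values.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
rad_pos = rng.random(n) < 0.2  # toy recall indicator, not study data
ai_pos = rng.random(n) < 0.2   # toy AI-positive indicator

df = pd.DataFrame({
    "followup_years": rng.uniform(0.5, 10.0, n),
    "cancer": (rng.random(n) < 0.1).astype(int),
    # Indicator columns; Concordant-Negative is the implicit reference.
    "rad_pos_ai_neg": (rad_pos & ~ai_pos).astype(int),
    "rad_neg_ai_pos": (~rad_pos & ai_pos).astype(int),
    "concordant_pos": (rad_pos & ai_pos).astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="followup_years", event_col="cancer")
print(cph.summary["exp(coef)"])  # hazard ratios vs Concordant-Negative
```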

Citations: 0
An Explainable Deep Learning Model for Focal Liver Lesion Diagnosis Using Multiparametric MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240531
Zhehan Shen, Lingzhi Chen, Lilong Wang, Shunjie Dong, Fakai Wang, Yaning Pan, Jiahao Zhou, Yikun Wang, Xinxin Xu, Huanhuan Chong, Huimin Lin, Weixia Li, Ruokun Li, Haihong Ma, Jing Ma, Yixing Yu, Lianjun Du, Xiaosong Wang, Shaoting Zhang, Fuhua Yan

Purpose To assess the effectiveness of an explainable deep learning model, developed using multiparametric MRI features, in improving diagnostic accuracy and efficiency of radiologists for classification of focal liver lesions (FLLs). Materials and Methods FLLs 1 cm or larger in diameter at multiparametric MRI were included in the study. The nnU-Net and Liver Imaging Feature Transformer models were developed using retrospective data from the Ruijin Hospital (January 2018-August 2023). The nnU-Net was used for lesion segmentation and the Liver Imaging Feature Transformer model for FLL classification. External testing was performed on data from the Xinjiang Production and Construction Corps Hospital, the First Affiliated Hospital of Soochow University, and Xinrui Hospital (January 2018-December 2023), with a prospective test set obtained from January to April 2024. Model performance was compared with that of radiologists, and the impact of model assistance on junior and senior radiologist performance was assessed. Evaluation metrics included the Dice similarity coefficient and accuracy. Results A total of 2131 individuals with FLLs (mean age, 56 years ± 12 [SD]; 1476 female patients) were included in the training, internal test, external test, and prospective test sets. Average Dice similarity coefficient values for liver and tumor segmentation across the three test sets were 0.98 and 0.96, respectively. Average accuracies for feature and lesion classification across the three test sets were 93% and 97%, respectively. Readings assisted by the Liver Imaging Feature Transformer model improved diagnostic accuracy (average 5.3% increase, P < .001), reduced reading time (average 34.5 seconds decrease, P < .001), and enhanced confidence (average 0.3-point increase, P < .001) of junior radiologists. Conclusion The proposed deep learning model accurately detected and classified FLLs, improving diagnostic accuracy and efficiency of junior radiologists. Keywords: Liver, MR-Dynamic Contrast Enhanced, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Feature Detection, Vision, Application Domain Supplemental material is available for this article. © RSNA, 2025 See also commentary by Adams and Bressem in this issue.
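
The two-stage design (nnU-Net segmentation followed by a transformer over per-sequence imaging features) suggests a classification head along the following lines. This is a toy PyTorch sketch, not the authors' architecture; the feature dimensionality, token count, class count, and layer sizes are placeholder assumptions.

```python
import torch
from torch import nn

class FeatureTransformerClassifier(nn.Module):
    """Toy stand-in for a 'Liver Imaging Feature Transformer'-style head:
    a transformer encoder over lesion feature tokens (one per MRI
    sequence), followed by a linear classifier. Sizes are illustrative."""

    def __init__(self, n_features: int = 64, n_classes: int = 7,
                 d_model: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_tokens, n_features)
        x = self.encoder(self.embed(tokens))
        return self.head(x.mean(dim=1))  # pool over tokens, then classify

logits = FeatureTransformerClassifier()(torch.randn(2, 8, 64))
print(logits.shape)  # torch.Size([2, 7])
```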

Citations: 0
DLMUSE: Robust Brain Segmentation in Seconds Using Deep Learning.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-01 DOI: 10.1148/ryai.240299
Vishnu M Bashyam, Guray Erus, Yuhan Cui, Di Wu, Gyujoon Hwang, Alexander Getka, Ashish Singh, George Aidinis, Kyunglok Baik, Randa Melhem, Elizabeth Mamourian, Jimit Doshi, Ashwini Davison, Ilya M Nasrallah, Christos Davatzikos

Purpose To introduce an open-source deep learning brain segmentation model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (patients aged 24-93 years, with a mean of 65 years ± 11.5 [SD]; 1007 female, 893 male) with reference labels generated using a multi-atlas segmentation method with human supervision. The final model was validated using 71 391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whitney U and McNemar tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores, 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years, P = .56). Classification of Alzheimer disease using DLMUSE features achieved an accuracy of 89% and F1 score of 0.80, which were comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation was over 10 000 times faster than the reference method (3.5 seconds vs 14 hours). Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-the-art methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. Keywords: Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Segmentation, Application Domain, Supervised Learning, MRI, Brain/Brain Stem Supplemental material is available for this article. © RSNA, 2025.
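
The downstream brain-age benchmark described above (fit a regressor on ROI volumes, report cross-validated mean absolute error) can be sketched in a few lines with scikit-learn. Synthetic volumes stand in for DLMUSE output here; the model choice and signal strength are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for DLMUSE output: one row per scan, one column per
# ROI volume; true ages drive a weak linear signal plus noise.
rng = np.random.default_rng(0)
n_scans, n_rois = 1000, 145
age = rng.uniform(24, 93, n_scans)
volumes = rng.normal(0, 1, (n_scans, n_rois)) - 0.02 * age[:, None]

# Ridge regression on ROI volumes, scored by cross-validated MAE,
# mirroring the brain-age evaluation protocol described above.
pred = cross_val_predict(Ridge(alpha=1.0), volumes, age, cv=5)
print("MAE (years):", np.abs(pred - age).mean())
```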

Citations: 0