
Latest Publications in Academic Radiology

Evaluating the Diagnostic Accuracy of Artificial Intelligence in Spondylolisthesis Detection: A Systematic Review and Meta-analysis
IF 3.9 Medicine (CAS Tier 2) Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-03-01 Epub Date: 2025-11-24 DOI: 10.1016/j.acra.2025.11.002
Mohammad-Taha Pahlevan-Fallahy MD, MPH , Amir-Mohammad Asgari MD , Alireza Soltani Khaboushan MD , Majid Chalian , Farhad Shaker MD , Parnian Yari MD , Sara Haseli

Rationale and Objectives

Spondylolisthesis, a vertebral displacement condition affecting 5–26% of adults, poses a significant health risk to the population. Artificial intelligence (AI) has emerged as a tool for enhancing diagnostic accuracy. However, heterogeneity in model performance requires a synthesis of the existing evidence.

Materials and Methods

This study evaluated the diagnostic accuracy of AI models for the detection of spondylolisthesis across multiple imaging modalities. Following PRISMA guidelines, PubMed, Scopus, Embase, and Web of Science were searched, yielding 24 studies (21 included in the meta-analysis) with 8029 observations. Inclusion criteria focused on original studies using standalone deep learning (DL) models with reported diagnostic metrics. Quality assessment was performed using Quality Assessment of Diagnostic Accuracy Studies-2, and statistical analysis employed random-effects meta-analysis.
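As an editorial aside, the random-effects pooling of per-study sensitivities can be sketched in a few lines. The sketch below assumes a DerSimonian–Laird estimator on logit-transformed proportions with a 0.5 continuity correction; this is a common choice for diagnostic meta-analysis, not necessarily the authors' exact method, and the study counts in the usage example are invented.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight each study with tau^2 added to its within-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

def pool_sensitivity(counts):
    """counts: list of (true_positives, false_negatives) per study."""
    effects, variances = [], []
    for tp, fn in counts:
        tp, fn = tp + 0.5, fn + 0.5          # continuity correction
        p = tp / (tp + fn)
        effects.append(math.log(p / (1.0 - p)))   # logit scale
        variances.append(1.0 / tp + 1.0 / fn)
    pooled_logit, _ = dersimonian_laird(effects, variances)
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Hypothetical per-study (TP, FN) counts, for illustration only
pooled_se = pool_sensitivity([(180, 12), (95, 5), (310, 20)])
```

Pooled specificity works the same way with (TN, FP) counts in place of (TP, FN).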

Results

AI models demonstrated high diagnostic performance, with a pooled sensitivity of 94.7% (95% CI: 92.6–96.2%) and specificity of 97.1% (95% CI: 95.0–98.4%). The area under the curve (AUC) was 0.979, indicating robust discriminative ability. MRI-based models slightly outperformed radiography models (sensitivity: 95.71% vs. 94.95%; specificity: 98.38% vs. 96.80%), though differences were nonsignificant (p = 0.651). Classification models significantly surpassed detection-focused models (p = 0.026), while biomechanical feature-based models and DL image processing models showed comparable performance (p = 0.264). Notably, models like FAR networks and YOLOv8 achieved high accuracy (89–98%) in grading and localization tasks.

Conclusions

AI models show high diagnostic accuracy for spondylolisthesis, underscoring their potential as clinical adjunctive tools. However, substantial heterogeneity highlights the need for standardized studies. These findings support integrating AI into diagnostic workflows, particularly in resource-limited settings, while urging further research to ensure real-world applicability.
Academic Radiology 33(3): 1034–1048.
Citations: 0
Enhancing Pediatric Fracture Detection: Multicenter Evaluation of a Deep Learning AI Model and Its Impact on Radiologist Performance
IF 3.9 Medicine (CAS Tier 2) Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-03-01 Epub Date: 2025-11-29 DOI: 10.1016/j.acra.2025.11.022
Sean Raj MD, MBA, Chief Medical Officer and Chief Innovation Officer, Barry Sadegi MD, John Simon MD

Rationale and Objectives

This study investigates the efficacy of a deep learning-based artificial intelligence (AI) model in detecting pediatric fractures on musculoskeletal (MSK) radiographs and assesses the impact of AI assistance on the performance of radiologists.

Materials and Methods

In Phase 1, the performance of the AI model was evaluated on 3016 MSK pediatric radiographs from 4 imaging centers in the US. Ground truth was established by consensus of pediatric radiologists. Phase 2 was a retrospective multi-reader, multi-center (MRMC) study using 189 cases. Twenty readers participated in two separate reading sessions evaluating for fracture, with and without AI assistance, with a one-month washout period.

Results

The AI model achieved a high standalone performance with accuracy (0.94), sensitivity (0.96), and specificity (0.86). Subgroup analysis revealed that the model maintained high performance across study types and confounders, including age (Se>0.94), gender (Se>0.96), anatomical region (Se>0.93), and fracture types (Se>0.93). With AI assistance, reader accuracy increased significantly from 0.93 to 0.96 (p < 0.05), sensitivity significantly improved from 0.86 to 0.93 (p < 0.05), and specificity improved from 0.94 to 0.95. The average reading time per exam was shortened by 26.1% with AI assistance.
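The accuracy, sensitivity, and specificity figures above follow directly from a 2x2 confusion matrix; a minimal sketch (the example counts are illustrative, chosen to match the reported sensitivity and specificity, and are not the study's actual data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix summary statistics."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on fracture cases
        "specificity": tn / (tn + fp),   # recall on non-fracture cases
    }

# Hypothetical counts: 100 fracture cases, 100 normal cases
m = diagnostic_metrics(tp=96, fp=14, tn=86, fn=4)
```

Note that overall accuracy depends on the fracture prevalence in the test set, which is why it need not sit midway between sensitivity and specificity.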

Conclusion

The AI model's high accuracy in detecting pediatric fractures underscores its significant clinical utility. The integration of this tool enhanced overall radiologist performance and boosted the diagnostic confidence among non-specialist readers.
Academic Radiology 33(3): 1121–1129.
Citations: 0
On Clinical Interpretation of Tumor-infiltrating Lymphocytes in Breast Cancer Prediction Models and Future Studies
IF 3.9 Medicine (CAS Tier 2) Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-03-01 Epub Date: 2025-12-02 DOI: 10.1016/j.acra.2025.11.031
Deniz Esin Tekcan Sanli MD, MSc , Ahmet Necati Sanli MD, MSc
Academic Radiology 33(3): 918.
Citations: 0
From Imaging to Intervention: A Multicenter-Validated Radiomics Pipeline for Guiding Femoral Neck Fracture Surgical Management
IF 3.9 Medicine (CAS Tier 2) Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-03-01 Epub Date: 2025-12-01 DOI: 10.1016/j.acra.2025.11.024
Lin Mu , Yao Liu , Yunming Xie , Haoyu Liu , Ke Liu , Zheng Miao , Han Xue , Mingyang Li , Dong Dong , Huimao Zhang

Rationale and Objectives

To develop and evaluate a femoral neck fracture (FNF) pipeline model for diagnosing fracture stability and aiding surgical decision-making.

Materials and Methods

Patients with confirmed FNFs were enrolled in the study. An automatic segmentation algorithm was employed to initially delineate fracture-displaced regions on CT images, followed by manual refinement. A logistic-regression model was first trained on selected radiomic features to generate a Rad-score for fracture-stability classification. The Rad-score was then fed into a downstream model to guide surgical decision-making. Internal and external validation with multi-center data were used to assess the generalizability of the pipeline model.
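The two-stage design described here, a logistic-regression Rad-score feeding a downstream decision model, can be sketched as follows. The plain gradient-descent fit, the synthetic one-feature data, and the reuse of the stability labels for the second stage are all illustrative assumptions, not the authors' implementation.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Batch gradient-descent logistic regression (a stand-in for the
    paper's unspecified fitting procedure)."""
    n = len(xs[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for j in range(n):
                gw[j] += err * x[j]
            gb += err
        w = [wi - lr * g / len(xs) for wi, g in zip(w, gw)]
        b -= lr * gb / len(xs)
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

random.seed(0)
# Stage 1: one synthetic "radiomic feature" per patient; label 1 = unstable.
labels = [y for y in (0, 1) for _ in range(30)]
feats = [[random.gauss(y, 0.3)] for y in labels]
w1, b1 = train_logistic(feats, labels)
rad_scores = [predict(w1, b1, x) for x in feats]   # the Rad-score

# Stage 2: the Rad-score becomes an input of the downstream
# surgical-decision model (labels reused here purely for illustration).
w2, b2 = train_logistic([[s] for s in rad_scores], labels)
```

In the actual pipeline, the stage-2 model would also see clinical variables and be trained against surgical-management labels rather than stability labels.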

Results

The internal dataset for fracture stability and surgical decision-making included 624 and 410 patients, respectively. The corresponding external test sets included 364 and 186 patients enrolled from 32 centers. The radiomics model for FNF stability exhibited robust performance, achieving an area under the curve (AUC) of 0.905 (95% confidence interval [CI]: 0.853–0.944) and 0.821 (95% CI: 0.778–0.859) for the internal and external test sets, respectively. The AUCs for the surgical decision-making models were 0.881 (95% CI: 0.810–0.932) and 0.820 (95% CI: 0.757–0.873) for the internal and external test sets, respectively.

Conclusion

The radiomics pipeline model exhibited robust performance in classifying fracture stability and aiding surgical decision-making in the test sets across 33 centers. Our model incorporates explainable artificial intelligence in fracture quantification analysis, supporting doctors in making objective clinical decisions.
Academic Radiology 33(3): 1049–1059.
Citations: 0
Generalizable Deep Learning for Prostate Cancer Risk Stratification: Multicenter Study Integrating 18F-PSMA-1007 PET/CT and mpMRI
IF 3.9 Medicine (CAS Tier 2) Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-03-01 Epub Date: 2026-01-16 DOI: 10.1016/j.acra.2025.12.050
Cunke Miao , Houzhang Sun , Fei Yao , Tianle Hong , Zedong Ren , Yuandi Zhuang , Qi Lin , Shuying Bian , Yunjun Yang , Yezhi Lin

Background

Prostate cancer is the second most common cancer in men, with rising mortality rates necessitating precise risk stratification. High-invasive biological features—specifically International Society of Urological Pathology (ISUP) grade, extracapsular extension (EPE), and positive surgical margins (PSM)—are critical for guiding treatment but are difficult to detect due to tumor heterogeneity. Current imaging modalities, including 18F-PSMA-1007 PET/CT and multiparametric MRI (mpMRI), have limitations in fully capturing these features. This study aims to develop a few-shot deep learning model (CL-MGNET) that integrates multimodal imaging and clinical data to predict high-risk biological features, optimizing performance even with limited training data.

Materials and Methods

This retrospective, multicenter study analyzed data from 377 patients: 341 from a primary medical center (Center A) and 36 from an independent external validation cohort (Center B). The study utilized multimodal inputs (PET/CT, mpMRI) and clinical variables to predict ISUP grade, EPE, and PSM. A specialized few-shot deep learning network, CL-MGNET, was designed to fuse these data sources. The model was trained using a restricted subset of 30 patients and subsequently evaluated on both internal and external test sets to assess generalizability across different centers.

Results

CL-MGNET demonstrated excellent performance in predicting high-invasive biological features (defined as the presence of at least one high-risk feature: ISUP ≥ 3, EPE, or PSM), achieving an internal test AUC of 0.877 and an external validation AUC of 0.872, which significantly outperformed the clinical model with an AUC of 0.792. The model surpassed both single-modality models (PET/CT, mpMRI) and the clinical model. Furthermore, CL-MGNET exhibited strong generalization capability, effectively predicting various high-risk biological features. When clinical variables were integrated, the model's performance improved significantly, exceeding traditional methods.
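The AUC values compared throughout these results are conventionally computed as the empirical Mann–Whitney statistic over model scores; a minimal sketch (the example scores are invented):

```python
def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen positive case is scored above a randomly chosen
    negative case, with ties counting half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for high-risk (positive) vs low-risk cases
auc = empirical_auc([0.9, 0.8, 0.7], [0.4, 0.6])
```

This pairwise form is O(n*m); production implementations rank the pooled scores instead, but the result is identical.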

Conclusion

The CL-MGNET model, leveraging multimodal imaging data and clinical variables with a few-shot learning approach, successfully predicts high-invasive biological features of prostate cancer with high accuracy, even with limited data. The model's performance across different biological features and medical centers shows its robust generalizability. This method holds great promise for improving prostate cancer diagnosis and risk prediction in data-limited environments.
Academic Radiology 33(3): 1107–1120.
Citations: 0
Deep Learning Based Multiomics Model for Risk Stratification of Postoperative Distant Metastasis in Colorectal Cancer
IF 3.9 Medicine (CAS Tier 2) Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2026-03-01 Epub Date: 2025-09-04 DOI: 10.1016/j.acra.2025.08.040
Xiuzhen Yao , Xiaoyu Han , Danjiang Huang , Yongfei Zheng , Shuitang Deng , Xiaoxiang Ning , Li Yuan , Weiqun Ao

Rationale and Objectives

To develop deep learning-based multiomics models for predicting postoperative distant metastasis (DM) and evaluating survival prognosis in colorectal cancer (CRC) patients.

Materials and Methods

This retrospective study included 521 CRC patients who underwent curative surgery at two centers. Preoperative CT and postoperative hematoxylin-eosin (HE) stained slides were collected. A total of 381 patients from Center 1 were split (7:3) into training and internal validation sets; 140 patients from Center 2 formed the independent external validation set. Patients were grouped based on DM status during follow-up. Radiological and pathological models were constructed using independent imaging and pathological predictors. Deep features were extracted with a ResNet-101 backbone to build deep learning radiomics (DLRS) and deep learning pathomics (DLPS) models. Two integrated models were developed: Nomogram 1 (radiological + DLRS) and Nomogram 2 (pathological + DLPS).

Results

CT-reported T (cT) stage (OR = 2.00, P = 0.006) and CT-reported N (cN) stage (OR = 1.63, P = 0.023) were identified as independent radiologic predictors for building the radiological model; pN stage (OR = 1.91, P = 0.003) and perineural invasion (OR = 2.07, P = 0.030) were identified as pathological predictors for building the pathological model. DLRS and DLPS incorporated 28 and 30 deep features, respectively. In the training set, the areas under the curve (AUC) for the radiological, pathological, DLRS, DLPS, Nomogram 1, and Nomogram 2 models were 0.657, 0.687, 0.931, 0.914, 0.938, and 0.930, respectively. DeLong’s test showed that DLRS, DLPS, and both nomograms significantly outperformed the conventional models (P < .05). Kaplan–Meier analysis confirmed effective 3-year disease-free survival (DFS) stratification by the nomograms.

Conclusion

Deep learning-based multiomics models provided high accuracy for postoperative DM prediction. Nomogram models enabled reliable DFS risk stratification in CRC patients.
Academic Radiology 33(3): 858–871.
引用次数: 0
Leading with C.A.R.E: A Framework to Foster Belonging and Well-Being in Radiology 以C.A.R.E为先导:培养放射学归属感和幸福感的框架。
IF 3.9 2区 医学 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-03-01 Epub Date: 2025-11-18 DOI: 10.1016/j.acra.2025.10.046
Lily M. Belfi M.D. FACR , Sarah Averill M.D. , Constantine Burgen M.D. , Reni Butler M.D. , Michele Retrouvey M.D. , Lori A. Deitte M.D. FACR FAUR
The field of radiology is experiencing profound shifts driven by technological advancements, evolving workplace structures, and changing generational expectations, contributing to increased rates of burnout, professional isolation, and diminished purpose among radiologists. To address these challenges and promote workforce sustainability, we propose the C.A.R.E. framework—encompassing Community, Advocacy, Recognition, and Empowerment—as a comprehensive, human-centered model to foster belonging and well-being in radiology. This framework emphasizes intentional strategies to build meaningful professional relationships in hybrid and remote environments; offers pathways to engage in advocacy at personal, interpersonal, and systemic levels; underscores the importance of recognition and appreciation in cultivating engagement and combating burnout; and highlights mentorship, coaching, and sponsorship as pivotal tools for professional and personal growth. By implementing the C.A.R.E. framework, radiology departments and organizations can create inclusive, supportive environments that enhance individual fulfillment, professional resilience, and organizational success. Future initiatives should focus on operationalizing and evaluating these practices to ensure sustained improvements in workforce well-being and patient care outcomes.
在技术进步、工作场所结构演变和世代期望变化的推动下,放射学领域正在经历深刻的变革,导致放射科医生职业倦怠率上升、职业孤立和目标降低。为了应对这些挑战并促进劳动力的可持续性,我们提出了C.A.R.E.框架——包括社区、倡导、认可和赋权——作为一个全面的、以人为本的模型,以促进放射学的归属感和福祉。该框架强调在混合和远程环境中建立有意义的专业关系的有意策略;提供在个人、人际和系统层面进行宣传的途径;强调认可和赞赏在培养敬业精神和对抗倦怠方面的重要性;并强调指导、指导和赞助是职业和个人成长的关键工具。通过实施C.A.R.E.框架,放射科和组织可以创造包容的、支持性的环境,从而提高个人的成就感、专业的弹性和组织的成功。未来的举措应侧重于实施和评估这些做法,以确保持续改善劳动力福利和患者护理结果。
{"title":"Leading with C.A.R.E: A Framework to Foster Belonging and Well-Being in Radiology","authors":"Lily M. Belfi M.D. FACR ,&nbsp;Sarah Averill M.D. ,&nbsp;Constantine Burgen M.D. ,&nbsp;Reni Butler M.D. ,&nbsp;Michele Retrouvey M.D. ,&nbsp;Lori A. Deitte M.D. FACR FAUR","doi":"10.1016/j.acra.2025.10.046","DOIUrl":"10.1016/j.acra.2025.10.046","url":null,"abstract":"<div><div>The field of radiology is experiencing profound shifts driven by technological advancements, evolving workplace structures, and changing generational expectations, contributing to increased rates of burnout, professional isolation, and diminished purpose among radiologists. To address these challenges and promote workforce sustainability, we propose the C.A.R.E. framework—encompassing Community, Advocacy, Recognition, and Empowerment—as a comprehensive, human-centered model to foster belonging and well-being in radiology. This framework emphasizes intentional strategies to build meaningful professional relationships in hybrid and remote environments; offers pathways to engage in advocacy at personal, interpersonal, and systemic levels; underscores the importance of recognition and appreciation in cultivating engagement and combating burnout; and highlights mentorship, coaching, and sponsorship as pivotal tools for professional and personal growth. By implementing the C.A.R.E. framework, radiology departments and organizations can create inclusive, supportive environments that enhance individual fulfillment, professional resilience, and organizational success. 
Future initiatives should focus on operationalizing and evaluating these practices to ensure sustained improvements in workforce well-being and patient care outcomes.</div></div>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"33 3","pages":"Pages 718-725"},"PeriodicalIF":3.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145558193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Learning-Based Differentiation of DCIS and IDC from Mammographic Microcalcifications 基于深度学习的DCIS和IDC与乳腺微钙化的鉴别。
IF 3.9 2区 医学 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-03-01 Epub Date: 2025-12-22 DOI: 10.1016/j.acra.2025.12.002
Deniz Esin Tekcan Sanli MD , Ahmet Necati Sanli
{"title":"Deep Learning-Based Differentiation of DCIS and IDC from Mammographic Microcalcifications","authors":"Deniz Esin Tekcan Sanli MD ,&nbsp;Ahmet Necati Sanli","doi":"10.1016/j.acra.2025.12.002","DOIUrl":"10.1016/j.acra.2025.12.002","url":null,"abstract":"","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"33 3","pages":"Pages 919-920"},"PeriodicalIF":3.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145821968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Pixel-level Radiomics and Deep Learning for Predicting Ki-67 Expression in Breast Cancer Based on Dual-modal Ultrasound Images 基于双模超声图像的像素级放射组学和深度学习预测乳腺癌中Ki-67的表达。
IF 3.9 2区 医学 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-03-01 Epub Date: 2026-01-14 DOI: 10.1016/j.acra.2025.12.047
Wei Wei , Fei Xia , Di Zhang , Wang Zhou , Xinjin Wang , Yu Gao , Wenwu Lu , Huijun Feng , Chaoxue Zhang

Rationale and Objectives

This study aimed to develop a deep learning model using a novel pixel-level radiomics approach based on two-dimensional (2D) and strain elastography (SE) ultrasound images to predict Ki-67 expression in breast cancer (BC).

Methods

This multicenter study included 1031 BC patients, who were divided into training (n = 616), internal validation (n = 265), and external test (n = 150) cohorts. An additional 63 patients were prospectively enrolled for further validation. The deep learning model, termed Vision-Mamba, predicts Ki-67 expression by integrating ultrasound (2D and SE) images with pixel-level radiomics feature maps (RFMs). A combined model was subsequently constructed by incorporating independent clinical predictors. Model performance was assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). SHapley Additive exPlanations (SHAP) were applied to enhance interpretability.

Results

We developed a Vision-Mamba-US-RFMs-Clinical (V-MURC) model that integrates ultrasound images, RFMs, and clinical data for accurate prediction of Ki-67 expression in BC. The area under the ROC curve (AUC) values for the internal validation, external test, and prospective validation cohorts were 0.954 (95% CI, 0.929 - 0.975), 0.941 (95% CI, 0.903 - 0.975), and 0.945 (95% CI, 0.883 - 0.989), respectively, demonstrating excellent discrimination and calibration. Compared with individual models, the V-MURC model achieved significantly superior performance across all datasets (Delong test, P < 0.05). Calibration curves and DCA further supported its clinical applicability. SHAP analysis provided visual interpretability of the model's decision-making process.

Conclusion

The V-MURC model based on pixel-level RFMs can accurately predict Ki-67 expression in BC and may serve as a valuable tool for individualized treatment decision-making in clinical practice.
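The V-MURC abstract reports decision curve analysis (DCA) alongside ROC metrics. DCA plots a model's net benefit against the threshold probability pt, trading true positives against false positives weighted by the odds pt/(1 − pt). A self-contained sketch of the standard net-benefit formula (generic illustration; the function and variable names are ours, not from the study):

```python
def net_benefit(labels, probs, pt):
    """Net benefit of treating all patients whose predicted probability
    is at least pt: (TP - FP * pt / (1 - pt)) / N."""
    n = len(labels)
    tp = sum(1 for y, p in zip(labels, probs) if p >= pt and y == 1)
    fp = sum(1 for y, p in zip(labels, probs) if p >= pt and y == 0)
    return (tp - fp * pt / (1 - pt)) / n


def net_benefit_treat_all(labels, pt):
    """Reference strategy that treats every patient (the 'treat-all'
    curve in a decision curve plot)."""
    prev = sum(labels) / len(labels)
    return prev - (1 - prev) * pt / (1 - pt)
```

A model is clinically useful over the threshold range where its net-benefit curve lies above both the treat-all curve and the treat-none line (net benefit 0), which is what the DCA in the abstract assesses.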
基本原理和目的:本研究旨在利用基于二维(2D)和应变弹性成像(SE)超声图像的新型像素级放射组学方法开发一种深度学习模型,以预测乳腺癌(BC)中Ki-67的表达。方法:本多中心研究纳入1031例BC患者,分为训练组(n = 616)、内部验证组(n = 265)和外部测试组(n = 150)。另外63名患者被纳入前瞻性研究以进一步验证。该深度学习模型被称为Vision-Mamba,通过整合超声(2D和SE)图像和像素级放射组学特征图(rfm)来预测Ki67的表达。随后通过合并独立的临床预测因子构建了一个联合模型。采用受试者工作特征(ROC)曲线、校准曲线和决策曲线分析(DCA)评估模型的性能。采用SHapley加性解释(SHAP)提高可解释性。结果:我们建立了一个视觉-曼巴-美国- rfm -临床(V-MURC)模型,该模型整合了超声图像、rfm和临床数据,用于准确预测BC中Ki-67的表达。内部验证队列、外部验证队列和前瞻性验证队列的ROC曲线下面积(AUC)值分别为0.954 (95% CI, 0.929 ~ 0.975)、0.941 (95% CI, 0.903 ~ 0.975)和0.945 (95% CI, 0.883 ~ 0.989),具有良好的判别和校准能力。与单个模型相比,V-MURC模型在所有数据集上的性能都显著优于单个模型(Delong检验,P < 0.05)。校准曲线和DCA进一步支持了其临床适用性。SHAP分析提供了模型决策过程的可视化可解释性。结论:基于像素级rmrm的V-MURC模型可以准确预测BC中Ki-67的表达,可作为临床个体化治疗决策的重要工具。
{"title":"Pixel-level Radiomics and Deep Learning for Predicting Ki-67 Expression in Breast Cancer Based on Dual-modal Ultrasound Images","authors":"Wei Wei ,&nbsp;Fei Xia ,&nbsp;Di Zhang ,&nbsp;Wang Zhou ,&nbsp;Xinjin Wang ,&nbsp;Yu Gao ,&nbsp;Wenwu Lu ,&nbsp;Huijun Feng ,&nbsp;Chaoxue Zhang","doi":"10.1016/j.acra.2025.12.047","DOIUrl":"10.1016/j.acra.2025.12.047","url":null,"abstract":"<div><h3>Rationale and Objectives</h3><div>This study aimed to develop a deep learning model using a novel pixel-level radiomics approach based on two-dimensional (2D) and strain elastography (SE) ultrasound images to predict Ki-67 expression in breast cancer (BC).</div></div><div><h3>Methods</h3><div>This multicenter study included 1031 BC patients, who were divided into training (n = 616), internal validation (n = 265), and external test (n = 150) cohorts. An additional 63 patients were prospectively enrolled for further validation. The deep learning model, termed Vision-Mamba, predicts Ki67 expression by integrating ultrasound (2D and SE) images with pixel-level radiomics feature maps (RFMs). A combined model was subsequently constructed by incorporating independent clinical predictors. Model performance was assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). SHapley Additive exPlanations (SHAP) were applied to enhance interpretability.</div></div><div><h3>Results</h3><div>We developed a Vision-Mamba-US-RFMs-Clinical (V-MURC) model that integrates ultrasound images, RFMs, and clinical data for accurate prediction of Ki-67 expression in BC. The area under the ROC curve (AUC) values for the internal validation, external test, and prospective validation cohorts were 0.954 (95% CI, 0.929 - 0.975), 0.941 (95% CI, 0.903 - 0.975), and 0.945 (95% CI, 0.883 - 0.989), respectively, demonstrating excellent discrimination and calibration. 
Compared with individual models, the V-MURC model achieved significantly superior performance across all datasets (Delong test, <em>P</em> &lt; 0.05). Calibration curves and DCA further supported its clinical applicability. SHAP analysis provided visual interpretability of the model's decision-making process.</div></div><div><h3>Conclusion</h3><div>The V-MURC model based on pixel-level RFMs can accurately predict Ki-67 expression in BC and may serve as a valuable tool for individualized treatment decision-making in clinical practice.</div></div>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"33 3","pages":"Pages 900-917"},"PeriodicalIF":3.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145991799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Automated Cardiac MRI Planning: From Localizers to Cine Images 自动心脏MRI计划:从定位器到电影图像。
IF 3.9 2区 医学 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-03-01 Epub Date: 2026-03-02 DOI: 10.1016/j.acra.2026.01.026
Soheil Kooraki MD, Arash Bedayat MD
{"title":"Automated Cardiac MRI Planning: From Localizers to Cine Images","authors":"Soheil Kooraki MD,&nbsp;Arash Bedayat MD","doi":"10.1016/j.acra.2026.01.026","DOIUrl":"10.1016/j.acra.2026.01.026","url":null,"abstract":"","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"33 3","pages":"Pages 922-923"},"PeriodicalIF":3.9,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147357005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0