Pub Date: 2025-11-13. eCollection Date: 2025-01-01. DOI: 10.3389/fradi.2025.1663742
Yasser Hamdan Al Ghamdi, Saud Hussain Alawad, Fayka Karem, Mohammed J Alsaadi
Fibrolipomatous hamartoma is a rare benign overgrowth consisting of intermixed adipose and fibrous connective tissue within the epineurium; involvement of the sciatic nerve is exceptionally rare. We present the case of a 46-year-old female with a progressively enlarging mass in her right posterior thigh, accompanied by sciatica and gluteal pain. Clinical assessment and MRI revealed a large lesion along the sciatic nerve with characteristic features of fibrolipomatous hamartoma: signal isointense to fat on T1-weighted images and hyperintense signal on fat-suppressed short tau inversion recovery sequences. The diagnosis was confirmed histopathologically after surgical excision. This case highlights the importance of recognizing the specific MRI features of this rare entity to avoid unnecessary invasive interventional procedures; an accurate MRI-based diagnosis can meaningfully inform clinical decisions and improve patient care.
Title: "Fibrolipomatous hamartoma of the sciatic nerve: an atypical case report." Frontiers in Radiology. 2025;5:1663742. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657156/pdf/
Pub Date: 2025-11-13. eCollection Date: 2025-01-01. DOI: 10.3389/fradi.2025.1680803
Mustafa Çağlar, Kerime Selin Ertaş, Mehmet Sıddık Cebe, Ilkay Kara, Navid Kheradmand, Evrim Metcalfe
Objective: In this study, the accuracy of deep learning-based models developed for synthetic CT (sCT) generation from conventional Cone Beam Computed Tomography (CBCT) images of prostate cancer patients was evaluated. The clinical applicability of these sCTs in treatment planning and their potential to support adaptive radiotherapy decision-making were also investigated.
Methods: A total of 50 CBCT-CT pairs were obtained from 10 retrospectively selected prostate cancer patients, each contributing one planning CT (pCT) and five CBCT scans acquired on different days during treatment. All images were preprocessed, anatomically matched, z-score normalised, and used as input to U-Net and ResU-Net models trained with PyTorch. The sCT outputs were quantitatively compared with the pCT using metrics such as SSIM, PSNR, MAE, and the HU difference distribution.
Results: Both models produced sCT images with higher similarity to pCT than the original CBCT images. The mean SSIM was 0.763 ± 0.040 for CBCT-CT pairs, 0.840 ± 0.026 with U-Net, and 0.851 ± 0.026 with ResU-Net, a significant increase for both models (p < 0.05). PSNR values were 21.55 ± 1.38 dB for CBCT, 24.74 ± 1.83 dB for U-Net, and 25.24 ± 1.61 dB for ResU-Net; ResU-Net achieved a significantly higher PSNR than U-Net (p < 0.05). For MAE, the mean error of 75.2 ± 18.7 HU in CBCT-CT pairs was reduced to 65.3 ± 14.8 HU by U-Net and 61.8 ± 13.7 HU by ResU-Net (p < 0.05).
Conclusion: Deep learning models trained with simple architectures such as U-Net and ResU-Net provide effective and feasible solutions for the generation of clinically relevant sCT from CBCT images, supporting accurate dose calculation and facilitating adaptive radiotherapy workflows in prostate cancer management.
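The image-similarity metrics reported above (MAE in Hounsfield units, PSNR in dB) can be computed directly from paired sCT/pCT volumes. The following is a minimal numpy sketch for intuition only, not the authors' code; the 2,000 HU data range used for PSNR is an illustrative assumption, and real studies typically also compute SSIM (e.g. via scikit-image).

```python
import numpy as np

def mae_hu(sct, pct):
    """Mean absolute error in Hounsfield units between synthetic and planning CT."""
    return float(np.mean(np.abs(sct.astype(np.float64) - pct.astype(np.float64))))

def psnr_db(sct, pct, data_range=2000.0):
    """Peak signal-to-noise ratio in dB, over an assumed HU data range."""
    mse = np.mean((sct.astype(np.float64) - pct.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(20.0 * np.log10(data_range) - 10.0 * np.log10(mse))

# Toy volumes standing in for a pCT and an sCT (values in HU).
rng = np.random.default_rng(0)
pct = rng.uniform(-1000, 1000, size=(8, 64, 64))
sct = pct + rng.normal(0, 50, size=pct.shape)  # sCT with ~50 HU residual error
print(mae_hu(sct, pct), psnr_db(sct, pct))
```

A lower MAE and higher PSNR for sCT-vs-pCT than for CBCT-vs-pCT is exactly the improvement the Results section quantifies.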
Title: "Synthetic CT generation from CBCT using deep learning for adaptive radiotherapy in prostate cancer." Frontiers in Radiology. 2025;5:1680803. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657355/pdf/
Background: Non-invasive and comprehensive molecular characterization of glioma is crucial for personalized treatment but remains limited by invasive biopsy procedures and stringent privacy restrictions on clinical data sharing. Federated learning (FL) provides a promising solution by enabling multi-institutional collaboration without compromising patient confidentiality.
Methods: We propose a multi-task 3D deep neural network framework based on federated learning. Using multi-modal MRI, and without sharing the original data, automatic segmentation of the T2-weighted hyperintense region and prediction of four molecular markers (IDH mutation, 1p/19q co-deletion, MGMT promoter methylation, WHO grade) were carried out in collaboration across multiple medical institutions. The model was trained on local patient data at independent clients, and model parameters were aggregated on a central server to achieve distributed collaborative learning. Training used five public datasets (n = 1,552) and evaluation used an external validation dataset (n = 466).
Results: The model showed good performance in the external test set (IDH AUC = 0.88, 1p/19q AUC = 0.84, MGMT AUC = 0.85, grading AUC = 0.94), and the median Dice of the segmentation task was 0.85.
Conclusions: Our federated multi-task deep learning model demonstrates the feasibility and effectiveness of predicting glioma molecular characteristics and grade from multi-parametric MRI, without compromising patient privacy. These findings suggest significant potential for clinical deployment, especially in scenarios where invasive tissue sampling is impractical or risky.
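The server-side aggregation step described in the Methods (clients train locally, a central server averages their parameters) can be sketched as FedAvg-style weighted averaging. This is an illustrative numpy sketch under stated assumptions (parameters represented as lists of arrays, weights proportional to client dataset size), not the paper's implementation.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg-style aggregation: average each parameter array across clients,
    weighting every client by its local dataset size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    n_layers = len(client_params[0])
    return [sum(w * params[i] for w, params in zip(weights, client_params))
            for i in range(n_layers)]

# Two toy clients, each holding one parameter tensor; client 2 has 3x the data.
client_params = [[np.array([0.0, 2.0])], [np.array([4.0, 6.0])]]
aggregated = fedavg(client_params, client_sizes=[1, 3])
print(aggregated[0])
```

The aggregated parameters are then broadcast back to the clients for the next round, so raw images never leave an institution.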
Title: "Federated radiomics analysis of preoperative MRI across institutions: toward integrated glioma segmentation and molecular subtyping." Ran Ren, Anjun Zhu, Yaxi Li, Huli Liu, Guo Huang, Jing Gu, Jianming Ni, Zengli Miao. Frontiers in Radiology. 2025;5:1648145. DOI: 10.3389/fradi.2025.1648145. Pub Date: 2025-11-10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12640913/pdf/
Pub Date: 2025-11-07. eCollection Date: 2025-01-01. DOI: 10.3389/fradi.2025.1684436
Lulu Wang
Breast cancer is the most common malignancy among women worldwide, and imaging remains critical for early detection, diagnosis, and treatment planning. Recent advances in artificial intelligence (AI), particularly self-supervised learning (SSL) and transformer-based architectures, have opened new opportunities for breast image analysis. SSL offers a label-efficient strategy that reduces reliance on large annotated datasets, with evidence suggesting that it can achieve strong performance. Transformer-based architectures, such as Vision Transformers, capture long-range dependencies and global contextual information, complementing the local feature sensitivity of convolutional neural networks. This study provides a comprehensive overview of recent developments in SSL and transformer models for breast lesion segmentation, detection, and classification, highlighting representative studies in each domain. It also discusses the advantages and current limitations of these approaches and outlines future research priorities, emphasizing that successful clinical translation depends on access to multi-institutional datasets to ensure generalizability, rigorous external validation to confirm real-world performance, and interpretable model designs to foster clinician trust and enable safe, effective deployment in clinical practice.
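As a concrete illustration of how Vision Transformers ingest a breast image, the hypothetical sketch below (not from the article) tokenises an image into flattened non-overlapping patches; self-attention then lets every patch token attend to every other, which is the source of the long-range, global context the review contrasts with the local receptive fields of CNNs.

```python
import numpy as np

def patchify(img, patch=16):
    """Split an HxW image into flattened, non-overlapping patches
    (the tokenisation step of a Vision Transformer)."""
    h, w = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    blocks = img.reshape(h // patch, patch, w // patch, patch)
    # Reorder to (row-block, col-block, within-patch rows, within-patch cols),
    # then flatten each patch into one token vector.
    return blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

# A 32x32 toy "image" becomes 4 tokens of length 256; in a ViT each token is
# then linearly embedded and fed to self-attention over all tokens at once.
tokens = patchify(np.arange(32 * 32).reshape(32, 32), patch=16)
print(tokens.shape)
```

A real ViT adds a learned linear projection and positional embeddings on top of these tokens; the point here is only that the whole image enters the attention layers jointly.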
Title: "Self-supervised learning and transformer-based technologies in breast cancer imaging." Frontiers in Radiology. 2025;5:1684436. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12634322/pdf/
Pub Date: 2025-11-06. eCollection Date: 2025-01-01. DOI: 10.3389/fradi.2025.1683274
Antonio Innocenzi, Sara Peluso, Federico Bruno, Laura Balducci, Ettore Rocchi, Michela Bellini, Alessia Catalucci, Patrizia Sucapane, Gennaro Saporito, Tommasina Russo, Gastone Castellani, Francesca Pistoia, Alessandra Splendiani
Objective: Magnetic resonance-guided focused ultrasound (MRgFUS) thalamotomy is an effective treatment for essential tremor (ET) and tremor-dominant Parkinson's disease (PD), yet a substantial proportion of patients experience tremor recurrence over time. Reliable imaging biomarkers to predict long-term outcomes are lacking. The purpose of the study was to evaluate whether radiomic features extracted from 24-h post-treatment MRI can predict clinically relevant tremor recurrence at 12 months after MRgFUS thalamotomy, using a machine learning (ML) approach.
Materials and methods: This retrospective, single-center study included 120 patients (61 ET, 59 PD) treated with unilateral MRgFUS Vim thalamotomy between February 2018 and June 2023. Tremor severity was assessed using part A of the Fahn-Tolosa-Marin Tremor Rating Scale (FTM-TRS) at baseline and at 12 months. Recurrence was defined as an FTM-TRS part A score ≥ 3 at 12 months. Lesions were manually segmented on 24-h post-treatment T2-weighted MRI. Forty radiomic features (18 first-order, 22 grey-level co-occurrence matrix (GLCM) texture features from Laplacian-of-Gaussian-filtered images) were extracted. A linear Support Vector Classifier with leave-one-out cross-validation was used for classification. Model explainability was assessed using SHapley Additive exPlanations (SHAP).
Results: Clinically relevant tremor recurrence occurred in 23 patients (19%). For the full cohort, the ML model achieved a balanced accuracy of 0.720, weighted F1-score of 0.737, and comparable sensitivity and specificity across classes. Performance was higher in PD (BA = 0.808, F1 = 0.793) than in ET (BA = 0.580, F1 = 0.696). The most predictive features were texture-derived GLCM metrics, particularly from edge-enhanced images, with first-order features contributing complementary information. No significant correlations were found between radiomic features and procedural parameters.
Conclusion: Radiomic analysis of MRgFUS lesions on 24-h post-treatment MRI can provide early prediction of 12-month tremor recurrence, with higher predictive value in PD than in ET. Texture-based features may capture microstructural characteristics linked to treatment durability. This approach could inform post-treatment monitoring and individualized management strategies.
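The GLCM texture metrics the study found most predictive can be illustrated with a tiny hand-rolled co-occurrence matrix. Radiomics studies normally extract these with a dedicated library (e.g. pyradiomics), so the numpy sketch below is for intuition only; it assumes the image is already quantised to a small number of grey levels.

```python
import numpy as np

def glcm(q, levels, dx=1, dy=0):
    """Symmetric, normalised grey-level co-occurrence matrix for one pixel offset.
    `q` is an integer image already quantised to values in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    m = m + m.T                      # count both pair directions (symmetry)
    return m / m.sum()

def glcm_contrast(p):
    """GLCM contrast: expected squared grey-level difference of co-occurring pixels."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

q = np.array([[0, 0, 1],
              [1, 1, 0]])
p = glcm(q, levels=2)
print(glcm_contrast(p))
```

High contrast means neighbouring voxels differ sharply in intensity; features like this, computed after edge-enhancing (Laplacian-of-Gaussian) filtering, are the kind of texture descriptors the classifier ranked highest.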
Title: "Radiomic signatures from postprocedural MRI thalamotomy lesion can predict long-term clinical outcome in patients with tremor after MRgFUS: a pilot study." Frontiers in Radiology. 2025;5:1683274. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12631418/pdf/
Background: Accurate diagnosis of anterior cruciate ligament (ACL) tears on magnetic resonance imaging (MRI) is critical for timely treatment planning. Deep learning (DL) approaches have shown promise in assisting clinicians, but many prior studies are limited by small datasets, lack of surgical confirmation, or exclusion of partial tears.
Aim: To evaluate the performance of multiple convolutional neural network (CNN) architectures, including a proposed CustomCNN, for ACL tear detection using a surgically validated dataset.
Methods: A total of 8,086 proton density-weighted sagittal knee MRI slices were obtained from patients whose ACL status (intact, partial, or complete tear) was confirmed arthroscopically. Eleven deep learning models, including CustomCNN, DenseNet121, and InceptionResNetV2, were trained and evaluated with strict patient-level separation to avoid data leakage. Model performance was assessed using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).
Results: The CustomCNN model achieved the highest diagnostic performance, with an accuracy of 91.5% (95% CI: 89.5-93.1), sensitivity of 92.4% (95% CI: 90.4-94.2), and an AUC of 0.913. The inclusion of both partial and complete tears enhanced clinical relevance, and patient-level splitting reduced the risk of inflated metrics from correlated slices. Compared with previous reports, the proposed approach demonstrated competitive results while addressing key methodological limitations.
Conclusion: The CustomCNN model enables rapid and reliable detection of ACL tears, including partial lesions, and may serve as a valuable decision-support tool for radiologists and orthopedic surgeons. The use of a surgically validated dataset and rigorous methodology enhances clinical credibility. Future work should expand to multicenter datasets, diverse MRI protocols, and prospective reader studies to establish generalizability and facilitate integration into real-world workflows.
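The patient-level separation credited above with preventing inflated metrics can be sketched simply: whole patients, with all of their MRI slices, are assigned to exactly one split, so correlated slices from one knee never straddle train and test. This is an illustrative numpy sketch, not the study's code; the split fraction and seed are arbitrary.

```python
import numpy as np

def patient_level_split(patient_ids, test_frac=0.2, seed=0):
    """Split slice-level data so every patient's slices land entirely in
    train or entirely in test, avoiding slice-level data leakage.
    Returns boolean masks (train, test) over the slice array."""
    ids = np.unique(patient_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    n_test = max(1, int(round(test_frac * len(ids))))
    test_ids = set(ids[:n_test].tolist())
    test_mask = np.array([pid in test_ids for pid in patient_ids])
    return ~test_mask, test_mask

# Ten slices from five patients (two slices each).
pids = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
train, test = patient_level_split(pids)
print(sorted(set(pids[train])), sorted(set(pids[test])))
```

A naive random split over slices would almost certainly place sibling slices of the same patient on both sides, letting the network "recognise" a knee rather than a tear.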
Title: "Artificial intelligence-assisted accurate diagnosis of anterior cruciate ligament tears using customized CNN and YOLOv9." Taner Alic, Sinan Zehir, Meryem Yalcinkaya, Emre Deniz, Harun Emre Kiran, Onur Afacan. Frontiers in Radiology. 2025;5:1691048. DOI: 10.3389/fradi.2025.1691048. Pub Date: 2025-11-04. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12623178/pdf/
Pub Date: 2025-11-04. eCollection Date: 2025-01-01. DOI: 10.3389/fradi.2025.1662089
Lin Zhou, Zhi-Cheng Huang, Xiao-Hui Lin, Shao-Jin Zhang, Ya He
Acute portal vein thrombosis (APVT) is a rare condition characterized by recent thrombus formation within the main portal vein or its branches. APVT occurring in patients without underlying cirrhosis or malignancy represents an even rarer presentation, with an estimated prevalence of 0.7-3.7 per 100,000 individuals. However, it can lead to severe complications, including intestinal infarction and mortality. We report two cases presenting with abdominal pain without an apparent precipitating factor. Both patients were diagnosed with APVT based on contrast-enhanced computed tomography (CT) findings, clinical presentation, and laboratory parameters. Depending on the extent of portal vein occlusion, distinct therapeutic approaches were employed: one patient underwent interventional therapy combining transjugular mechanical thrombectomy/thrombolysis with transjugular intrahepatic portosystemic shunt (TIPS) placement, while the other received systemic pharmacological thrombolysis. Successful portal vein recanalization was achieved in both patients, who subsequently recovered and were discharged. These cases underscore that prompt diagnosis and management of APVT can avert adverse clinical outcomes. Contrast-enhanced CT demonstrates significant value in classifying APVT, assessing disease severity, evaluating treatment response, and identifying complications, thereby providing crucial evidence for clinical decision-making.
Title: "Case Report: CT manifestations of acute portal vein thrombosis: cases report and literature review." Frontiers in Radiology. 2025;5:1662089. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12623160/pdf/
Pub Date : 2025-11-03eCollection Date: 2025-01-01DOI: 10.3389/fradi.2025.1672382
B T Kavya, Shweta Raviraj Poojary, Harsha Sundaramurthy
Spinal cord infarction following neuraxial anesthesia is a rare but serious complication. We present the case of a 70-year-old female who developed acute onset of left lower limb weakness immediately following spinal anesthesia administered for total hip replacement. Clinical features were consistent with incomplete Brown-Séquard syndrome. MRI revealed a T2/STIR hyperintense lesion involving the left hemicord at the D12-L1 vertebral level, suggestive of sulcal artery infarction. MRI showed only age-related changes. After a structured physiotherapy program, the patient experienced significant functional improvement and was discharged with stable vitals. This case highlights the importance of early diagnosis and management of spinal cord infarction in the perioperative setting.
{"title":"Case Report: Sulcal artery infarction presenting as incomplete Brown-Séquard syndrome following spinal anesthesia in a 70-year-old female: a rare postoperative neurological complication.","authors":"B T Kavya, Shweta Raviraj Poojary, Harsha Sundaramurthy","doi":"10.3389/fradi.2025.1672382","DOIUrl":"10.3389/fradi.2025.1672382","url":null,"abstract":"<p><p>Spinal cord infarction following neuraxial anesthesia is a rare but serious complication. We present the case of a 70-year-old female who developed acute onset of left lower limb weakness immediately following spinal anesthesia administered for total hip replacement. Clinical features were consistent with incomplete Brown-Séquard syndrome. MRI revealed a T2/STIR hyperintense lesion involving the left hemicord at the D12-L1 vertebral level, suggestive of sulcal artery infarction. MRI showed only age-related changes. After a structured physiotherapy program, the patient experienced significant functional improvement and was discharged with stable vitals. This case highlights the importance of early diagnosis and management of spinal cord infarction in the perioperative setting.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"5 ","pages":"1672382"},"PeriodicalIF":2.3,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12620253/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145552229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-31eCollection Date: 2025-01-01DOI: 10.3389/fradi.2025.1695043
Sebastian Gassenmaier, Franziska Katharina Staber, Stephan Ursprung, Judith Herrmann, Sebastian Werner, Andreas Lingg, Lisa C Adams, Haidara Almansour, Konstantin Nikolaou, Saif Afat
Purpose: This study evaluates the impact of high-resolution T2-weighted imaging (T2HR) combined with deep learning image reconstruction (DLR) on image quality, lesion delineation, and extraprostatic extension (EPE) assessment in prostate multiparametric MRI (mpMRI).
Materials and methods: This retrospective study included 69 patients who underwent mpMRI of the prostate on a 3 T scanner with DLR between April 2023 and March 2024. Routine mpMRI protocols adhering to the Prostate Imaging Reporting and Data System (PI-RADS) v2.1 were used, including an additional T2HR sequence [2 mm slice thickness, 4:31 min vs. 4:12 min for standard T2 (T2S)]. The image datasets were evaluated by two radiologists using a Likert scale ranging from 1 to 5, with 5 being the best for sharpness, lesion contours, motion artifacts, prostate border delineation, overall image quality, and diagnostic confidence. PI-RADS scoring and EPE suspicion were analyzed. The statistical methods used included the Wilcoxon signed-rank test and Cohen's kappa for inter-reader agreement.
Results: T2HR significantly improved lesion contours (medians of 5 vs. 4, p < 0.001), prostate border delineation (medians of 5 vs. 4, p < 0.001), and overall image quality (medians of 5 vs. 4, p < 0.001) compared to T2S. However, motion artifacts were significantly worse in T2HR. Substantial inter-reader agreement was observed in the PI-RADS scoring. EPE detection marginally increased with T2HR, though histopathological validation was limited.
Conclusion: T2HR imaging with DLR enhances image quality, lesion delineation, and diagnostic confidence without significantly prolonged acquisition time. It shows potential for improving EPE assessment in prostate cancer but requires further validation in larger studies.
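The inter-reader agreement statistic named in the methods above (Cohen's kappa on 1-5 Likert ratings) can be sketched in plain Python. The reader scores below are made-up illustrative values, not data from the study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items where both raters match.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement expected if the two raters were independent.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Likert (1-5) image-quality scores from two readers.
reader1 = [5, 4, 5, 3, 5, 4]
reader2 = [5, 4, 4, 3, 5, 5]
print(round(cohens_kappa(reader1, reader2), 3))  # 0.455 (moderate agreement)
```

Kappa discounts the agreement two readers would reach by chance alone, which is why it is preferred over raw percent agreement for ordinal scores such as these.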
{"title":"High-resolution deep learning-reconstructed T2-weighted imaging for the improvement of image quality and extraprostatic extension assessment in prostate MRI.","authors":"Sebastian Gassenmaier, Franziska Katharina Staber, Stephan Ursprung, Judith Herrmann, Sebastian Werner, Andreas Lingg, Lisa C Adams, Haidara Almansour, Konstantin Nikolaou, Saif Afat","doi":"10.3389/fradi.2025.1695043","DOIUrl":"10.3389/fradi.2025.1695043","url":null,"abstract":"<p><strong>Purpose: </strong>This study evaluates the impact of high-resolution T2-weighted imaging (T2<sub>HR</sub>) combined with deep learning image reconstruction (DLR) on image quality, lesion delineation, and extraprostatic extension (EPE) assessment in prostate multiparametric MRI (mpMRI).</p><p><strong>Materials and methods: </strong>This retrospective study included 69 patients who underwent mpMRI of the prostate on a 3 T scanner with DLR between April 2023 and March 2024. Routine mpMRI protocols adhering to the Prostate Imaging Reporting and Data System (PI-RADS) v2.1 were used, including an additional T2<sub>HR</sub> sequence [2 mm slice thickness, 4:31 min vs. 4:12 min for standard T2 (T2<sub>S</sub>)]. The image datasets were evaluated by two radiologists using a Likert scale ranging from 1 to 5, with 5 being the best for sharpness, lesion contours, motion artifacts, prostate border delineation, overall image quality, and diagnostic confidence. PI-RADS scoring and EPE suspicion were analyzed. The statistical methods used included the Wilcoxon signed-rank test and Cohen's kappa for inter-reader agreement.</p><p><strong>Results: </strong>T2<sub>HR</sub> significantly improved lesion contours (medians of 5 vs. 4, <i>p</i> < 0.001), prostate border delineation (medians of 5 vs. 4, <i>p</i> < 0.001), and overall image quality (medians of 5 vs. 4, <i>p</i> < 0.001) compared to T2<sub>S</sub>. However, motion artifacts were significantly worse in T2<sub>HR</sub>. 
Substantial inter-reader agreement was observed in the PI-RADS scoring. EPE detection marginally increased with T2<sub>HR</sub>, though histopathological validation was limited.</p><p><strong>Conclusion: </strong>T2<sub>HR</sub> imaging with DLR enhances image quality, lesion delineation, and diagnostic confidence without significantly prolonged acquisition time. It shows potential for improving EPE assessment in prostate cancer but requires further validation in larger studies.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"5 ","pages":"1695043"},"PeriodicalIF":2.3,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12615415/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145544332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-28eCollection Date: 2025-01-01DOI: 10.3389/fradi.2025.1670517
Daniel Nguyen, Isaac Bronson, Ryan Chen, Young H Kim
Objective: To systematically evaluate the diagnostic accuracy of various GPT models in radiology, focusing on differential diagnosis performance across textual and visual input modalities, model versions, and clinical contexts.
Methods: A systematic review and meta-analysis were conducted using PubMed and SCOPUS databases on March 24, 2025, retrieving 639 articles. Studies were eligible if they evaluated GPT model diagnostic accuracy on radiology cases. Non-radiology applications, fine-tuned/custom models, board-style multiple-choice questions, or studies lacking accuracy data were excluded. After screening, 28 studies were included. Risk of bias was assessed using the Newcastle-Ottawa Scale (NOS). Diagnostic accuracy was assessed as top diagnosis accuracy (correct diagnosis listed first) and differential accuracy (correct diagnosis listed anywhere). Statistical analysis involved Mann-Whitney U tests using study-level median accuracy with interquartile ranges (IQR), and a generalized linear mixed-effects model (GLMM) to evaluate predictors influencing model performance.
Results: Analysis included 8,852 radiological cases across multiple radiology subspecialties. Differential accuracy varied significantly among GPT models, with newer models (GPT-4T: 72.00%, median 82.32%; GPT-4o: 57.23%, median 53.75%; GPT-4: 56.46%, median 56.65%) outperforming earlier versions (GPT-3.5: 37.87%, median 36.33%). Textual inputs demonstrated higher accuracy (GPT-4: 56.46%, median 58.23%) compared to visual inputs (GPT-4V: 42.32%, median 41.41%). The provision of clinical history was associated with improved diagnostic accuracy in the GLMM (OR = 1.27, p = .001), despite unadjusted medians showing lower performance when history was provided (61.74% vs. 52.28%). Private data (86.51%, median 94.00%) yielded higher accuracy than public data (47.62%, median 46.45%). Accuracy trends indicated improvement in newer models over time, while GPT-3.5's accuracy declined. GLMM results showed higher odds of accuracy for advanced models (OR = 1.84), and lower odds for visual inputs (OR = 0.29) and public datasets (OR = 0.34), while accuracy showed no significant trend over successive study years (p = 0.57). Egger's test found no significant publication bias, though considerable methodological heterogeneity was observed.
Conclusion: This meta-analysis highlights significant variability in GPT model performance influenced by input modality, data source, and model version. High methodological heterogeneity across studies emphasizes the need for standardized protocols in future research, and readers should interpret pooled estimates and medians with this variability in mind.
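The study-level comparison described in the methods above (Mann-Whitney U tests on median accuracies with IQRs) can be sketched in plain Python. The per-study accuracy values below are illustrative placeholders, not the pooled data from the review, and the quartile rule is a simple sorted-index convention.

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs vs. ys; ties between samples count one half."""
    return sum((x > y) + 0.5 * (x == y) for x in xs for y in ys)

def median_iqr(values):
    """Median and interquartile range using simple sorted-index quartiles."""
    s = sorted(values)
    n = len(s)
    med = (s[(n - 1) // 2] + s[n // 2]) / 2
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    return med, q3 - q1

# Hypothetical per-study differential accuracies (%) for two model groups.
newer = [82.3, 72.0, 56.7, 53.8]   # e.g., studies of later GPT versions
older = [37.9, 36.3, 41.4]         # e.g., studies of earlier GPT versions
print(mann_whitney_u(newer, older))  # 12.0: every "newer" study beats every "older" one
print(median_iqr(newer))
```

When U equals the product of the two sample sizes (here 4 × 3 = 12), the groups are completely separated, which is the pattern the rank test is most sensitive to.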
{"title":"A systematic review and meta-analysis of GPT-based differential diagnostic accuracy in radiological cases: 2023-2025.","authors":"Daniel Nguyen, Isaac Bronson, Ryan Chen, Young H Kim","doi":"10.3389/fradi.2025.1670517","DOIUrl":"10.3389/fradi.2025.1670517","url":null,"abstract":"<p><strong>Objective: </strong>To systematically evaluate the diagnostic accuracy of various GPT models in radiology, focusing on differential diagnosis performance across textual and visual input modalities, model versions, and clinical contexts.</p><p><strong>Methods: </strong>A systematic review and meta-analysis were conducted using PubMed and SCOPUS databases on March 24, 2025, retrieving 639 articles. Studies were eligible if they evaluated GPT model diagnostic accuracy on radiology cases. Non-radiology applications, fine-tuned/custom models, board-style multiple-choice questions, or studies lacking accuracy data were excluded. After screening, 28 studies were included. Risk of bias was assessed using the Newcastle-Ottawa Scale (NOS). Diagnostic accuracy was assessed as top diagnosis accuracy (correct diagnosis listed first) and differential accuracy (correct diagnosis listed anywhere). Statistical analysis involved Mann-Whitney U tests using study-level median accuracy with interquartile ranges (IQR), and a generalized linear mixed-effects model (GLMM) to evaluate predictors influencing model performance.</p><p><strong>Results: </strong>Analysis included 8,852 radiological cases across multiple radiology subspecialties. Differential accuracy varied significantly among GPT models, with newer models (GPT-4T: 72.00%, median 82.32%; GPT-4o: 57.23%, median 53.75%; GPT-4: 56.46%, median 56.65%) outperforming earlier versions (GPT-3.5: 37.87%, median 36.33%). Textual inputs demonstrated higher accuracy (GPT-4: 56.46%, median 58.23%) compared to visual inputs (GPT-4V: 42.32%, median 41.41%). 
The provision of clinical history was associated with improved diagnostic accuracy in the GLMM (OR = 1.27, <i>p</i> = .001), despite unadjusted medians showing lower performance when history was provided (61.74% vs. 52.28%). Private data (86.51%, median 94.00%) yielded higher accuracy than public data (47.62%, median 46.45%). Accuracy trends indicated improvement in newer models over time, while GPT-3.5's accuracy declined. GLMM results showed higher odds of accuracy for advanced models (OR = 1.84), and lower odds for visual inputs (OR = 0.29) and public datasets (OR = 0.34), while accuracy showed no significant trend over successive study years (<i>p</i> = 0.57). Egger's test found no significant publication bias, though considerable methodological heterogeneity was observed.</p><p><strong>Conclusion: </strong>This meta-analysis highlights significant variability in GPT model performance influenced by input modality, data source, and model version. High methodological heterogeneity across studies emphasizes the need for standardized protocols in future research, and readers should interpret pooled estimates and medians with this variability in mind.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"5 ","pages":"1670517"},"PeriodicalIF":2.3,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12602482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145507966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}