
Frontiers in radiology: latest publications

Federated radiomics analysis of preoperative MRI across institutions: toward integrated glioma segmentation and molecular subtyping.
IF 2.3 Pub Date: 2025-11-10 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1648145
Ran Ren, Anjun Zhu, Yaxi Li, Huli Liu, Guo Huang, Jing Gu, Jianming Ni, Zengli Miao

Background: Non-invasive and comprehensive molecular characterization of glioma is crucial for personalized treatment but remains limited by invasive biopsy procedures and stringent privacy restrictions on clinical data sharing. Federated learning (FL) provides a promising solution by enabling multi-institutional collaboration without compromising patient confidentiality.

Methods: We propose a multi-task 3D deep neural network framework based on federated learning. Using multi-modal MRI and without sharing raw data, multiple medical institutions collaborated to perform automatic segmentation of the T2-weighted hyperintense region and prediction of four markers (IDH mutation, 1p/19q co-deletion, MGMT promoter methylation, and WHO grade). We trained the model on local patient data at independent clients and aggregated the model parameters on a central server to achieve distributed collaborative learning. The model was trained on five public datasets (n = 1,552) and evaluated on an external validation dataset (n = 466).
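
The client/server training loop described above corresponds to a FedAvg-style aggregation: each client trains on its private data and returns its parameters, and the server averages them weighted by local sample counts. The Python/PyTorch sketch below illustrates only that aggregation step under stated assumptions; the toy model, client sample counts, and the omitted local-training step are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the paper uses a multi-task 3D network instead.
def make_model():
    return nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

def fedavg(client_states, client_sizes):
    """Average client parameters weighted by local sample counts (FedAvg-style)."""
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0]:
        global_state[key] = torch.stack(
            [state[key].float() * (n / total)
             for state, n in zip(client_states, client_sizes)]
        ).sum(dim=0)
    return global_state

# One simulated communication round with three clients.
global_model = make_model()
client_states, client_sizes = [], []
for n_local in (500, 600, 452):            # hypothetical per-client case counts
    local = make_model()
    local.load_state_dict(global_model.state_dict())
    # ... local training on the client's private data would happen here ...
    client_states.append(local.state_dict())
    client_sizes.append(n_local)

global_model.load_state_dict(fedavg(client_states, client_sizes))
```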

Results: The model showed good performance on the external test set (IDH AUC = 0.88, 1p/19q AUC = 0.84, MGMT AUC = 0.85, grading AUC = 0.94), and the median Dice score for the segmentation task was 0.85.
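
The segmentation result above is summarized by the Dice coefficient. For reference, a minimal NumPy sketch of how Dice is typically computed on binary masks (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example on a small 3D volume.
rng = np.random.default_rng(0)
pred = rng.random((8, 64, 64)) > 0.5
target = rng.random((8, 64, 64)) > 0.5
print(round(dice_coefficient(pred, target), 3))
```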

Conclusions: Our federated multi-task deep learning model demonstrates the feasibility and effectiveness of predicting glioma molecular characteristics and grade from multi-parametric MRI, without compromising patient privacy. These findings suggest significant potential for clinical deployment, especially in scenarios where invasive tissue sampling is impractical or risky.

Citations: 0
Self-supervised learning and transformer-based technologies in breast cancer imaging.
IF 2.3 Pub Date: 2025-11-07 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1684436
Lulu Wang

Breast cancer is the most common malignancy among women worldwide, and imaging remains critical for early detection, diagnosis, and treatment planning. Recent advances in artificial intelligence (AI), particularly self-supervised learning (SSL) and transformer-based architectures, have opened new opportunities for breast image analysis. SSL offers a label-efficient strategy that reduces reliance on large annotated datasets, with evidence suggesting that it can achieve strong performance. Transformer-based architectures, such as Vision Transformers, capture long-range dependencies and global contextual information, complementing the local feature sensitivity of convolutional neural networks. This study provides a comprehensive overview of recent developments in SSL and transformer models for breast lesion segmentation, detection, and classification, highlighting representative studies in each domain. It also discusses the advantages and current limitations of these approaches and outlines future research priorities, emphasizing that successful clinical translation depends on access to multi-institutional datasets to ensure generalizability, rigorous external validation to confirm real-world performance, and interpretable model designs to foster clinician trust and enable safe, effective deployment in clinical practice.

Citations: 0
Radiomic signatures from postprocedural MRI thalamotomy lesion can predict long-term clinical outcome in patients with tremor after MRgFUS: a pilot study.
IF 2.3 Pub Date: 2025-11-06 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1683274
Antonio Innocenzi, Sara Peluso, Federico Bruno, Laura Balducci, Ettore Rocchi, Michela Bellini, Alessia Catalucci, Patrizia Sucapane, Gennaro Saporito, Tommasina Russo, Gastone Castellani, Francesca Pistoia, Alessandra Splendiani

Objective: Magnetic resonance-guided focused ultrasound (MRgFUS) thalamotomy is an effective treatment for essential tremor (ET) and tremor-dominant Parkinson's disease (PD), yet a substantial proportion of patients experience tremor recurrence over time. Reliable imaging biomarkers to predict long-term outcomes are lacking. The purpose of the study was to evaluate whether radiomic features extracted from 24-h post-treatment MRI can predict clinically relevant tremor recurrence at 12 months after MRgFUS thalamotomy, using a machine learning (ML) approach.

Materials and methods: This retrospective, single-center study included 120 patients (61 ET, 59 PD) treated with unilateral MRgFUS Vim thalamotomy between February 2018 and June 2023. Tremor severity was assessed using part A of the Fahn-Tolosa-Marin Tremor Rating Scale (FTM-TRS) at baseline and 12 months. Recurrence was defined as an FTM-TRS part A score ≥ 3 at 12 months. Lesions were manually segmented on 24-h post-treatment T2-weighted MRI. Forty radiomic features were extracted: 18 first-order features and 22 gray-level co-occurrence matrix (GLCM) texture features computed on Laplacian-of-Gaussian-filtered images. A linear Support Vector Classifier with leave-one-out cross-validation was used for classification. Model explainability was assessed using SHapley Additive exPlanations (SHAP).
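
As a rough illustration of the pipeline described above (standardized features, a linear Support Vector Classifier, leave-one-out cross-validation), the scikit-learn sketch below uses synthetic data in place of the study's radiomic features; the feature values, recurrence rate, and hyperparameters are assumptions, and the SHAP explanation step is omitted.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical feature matrix: 120 patients x 40 radiomic features.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 40))
y = (rng.random(120) < 0.19).astype(int)   # ~19% recurrence, as in the cohort

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))

# Leave-one-out cross-validation: each patient is held out exactly once.
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf.fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print("balanced accuracy:", round(balanced_accuracy_score(y, preds), 3))
print("weighted F1-score:", round(f1_score(y, preds, average="weighted"), 3))
```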

Results: Clinically relevant tremor recurrence occurred in 23 patients (19%). For the full cohort, the ML model achieved a balanced accuracy of 0.720, weighted F1-score of 0.737, and comparable sensitivity and specificity across classes. Performance was higher in PD (BA = 0.808, F1 = 0.793) than in ET (BA = 0.580, F1 = 0.696). The most predictive features were texture-derived GLCM metrics, particularly from edge-enhanced images, with first-order features contributing complementary information. No significant correlations were found between radiomic features and procedural parameters.

Conclusion: Radiomic analysis of MRgFUS lesions on 24-h post-treatment MRI can provide early prediction of 12-month tremor recurrence, with higher predictive value in PD than in ET. Texture-based features may capture microstructural characteristics linked to treatment durability. This approach could inform post-treatment monitoring and individualized management strategies.

Citations: 0
Artificial intelligence-assisted accurate diagnosis of anterior cruciate ligament tears using customized CNN and YOLOv9.
IF 2.3 Pub Date: 2025-11-04 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1691048
Taner Alic, Sinan Zehir, Meryem Yalcinkaya, Emre Deniz, Harun Emre Kiran, Onur Afacan

Background: Accurate diagnosis of anterior cruciate ligament (ACL) tears on magnetic resonance imaging (MRI) is critical for timely treatment planning. Deep learning (DL) approaches have shown promise in assisting clinicians, but many prior studies are limited by small datasets, lack of surgical confirmation, or exclusion of partial tears.

Aim: To evaluate the performance of multiple convolutional neural network (CNN) architectures, including a proposed CustomCNN, for ACL tear detection using a surgically validated dataset.

Methods: A total of 8,086 proton density-weighted sagittal knee MRI slices were obtained from patients whose ACL status (intact, partial, or complete tear) was confirmed arthroscopically. Eleven deep learning models, including CustomCNN, DenseNet121, and InceptionResNetV2, were trained and evaluated with strict patient-level separation to avoid data leakage. Model performance was assessed using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).
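
The key safeguard mentioned above, strict patient-level separation, amounts to splitting by patient ID rather than by slice so that slices from one patient never appear in both the training and test sets. The sketch below shows one way to do this with a grouped split; the patient count and label distribution are hypothetical.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical slice table: every MRI slice carries its patient ID and label.
rng = np.random.default_rng(0)
n_slices = 8086
patient_ids = rng.integers(0, 600, size=n_slices)   # hypothetical number of patients
labels = rng.integers(0, 3, size=n_slices)          # 0 = intact, 1 = partial, 2 = complete tear

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(
    splitter.split(np.zeros((n_slices, 1)), labels, groups=patient_ids)
)

# No patient contributes slices to both sets, so correlated slices cannot leak.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
print(len(train_idx), "training slices,", len(test_idx), "test slices")
```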

Results: The CustomCNN model achieved the highest diagnostic performance, with an accuracy of 91.5% (95% CI: 89.5-93.1), sensitivity of 92.4% (95% CI: 90.4-94.2), and an AUC of 0.913. The inclusion of both partial and complete tears enhanced clinical relevance, and patient-level splitting reduced the risk of inflated metrics from correlated slices. Compared with previous reports, the proposed approach demonstrated competitive results while addressing key methodological limitations.

Conclusion: The CustomCNN model enables rapid and reliable detection of ACL tears, including partial lesions, and may serve as a valuable decision-support tool for radiologists and orthopedic surgeons. The use of a surgically validated dataset and rigorous methodology enhances clinical credibility. Future work should expand to multicenter datasets, diverse MRI protocols, and prospective reader studies to establish generalizability and facilitate integration into real-world workflows.

Citations: 0
Case Report: CT manifestations of acute portal vein thrombosis: cases report and literature review.
IF 2.3 Pub Date: 2025-11-04 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1662089
Lin Zhou, Zhi-Cheng Huang, Xiao-Hui Lin, Shao-Jin Zhang, Ya He

Acute portal vein thrombosis (APVT) is a rare condition characterized by recent thrombus formation within the main portal vein or its branches. APVT occurring in patients without underlying cirrhosis or malignancy represents an even rarer presentation, with an estimated prevalence of 0.7-3.7 per 100,000 individuals. However, it can lead to severe complications, including intestinal infarction and mortality. We report two cases presenting with abdominal pain without an apparent precipitating factor. Both patients were diagnosed with APVT based on contrast-enhanced computed tomography (CT) findings, clinical presentation, and laboratory parameters. Depending on the extent of portal vein occlusion, distinct therapeutic approaches were employed: one patient underwent interventional therapy combining transjugular mechanical thrombectomy/thrombolysis with transjugular intrahepatic portosystemic shunt (TIPS) placement, while the other received systemic pharmacological thrombolysis. Successful portal vein recanalization was achieved in both patients, who subsequently recovered and were discharged. These cases underscore that prompt diagnosis and management of APVT can avert adverse clinical outcomes. Contrast-enhanced CT demonstrates significant value in classifying APVT, assessing disease severity, evaluating treatment response, and identifying complications, thereby providing crucial evidence for clinical decision-making.

Citations: 0
Case Report: Sulcal artery infarction presenting as incomplete Brown-Séquard syndrome following spinal anesthesia in a 70-year-old female: a rare postoperative neurological complication.
IF 2.3 Pub Date: 2025-11-03 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1672382
B T Kavya, Shweta Raviraj Poojary, Harsha Sundaramurthy

Spinal cord infarction following neuraxial anesthesia is a rare but serious complication. We present the case of a 70-year-old female who developed acute onset of left lower limb weakness immediately following spinal anesthesia administered for total hip replacement. Clinical features were consistent with incomplete Brown-Séquard syndrome. MRI revealed a T2/STIR hyperintense lesion involving the left hemicord at the D12-L1 vertebral level, suggestive of sulcal artery infarction. MRI showed only age-related changes. After a structured physiotherapy program, the patient experienced significant functional improvement and was discharged with stable vitals. This case highlights the importance of early diagnosis and management of spinal cord infarction in the perioperative setting.

Citations: 0
High-resolution deep learning-reconstructed T2-weighted imaging for the improvement of image quality and extraprostatic extension assessment in prostate MRI.
IF 2.3 Pub Date: 2025-10-31 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1695043
Sebastian Gassenmaier, Franziska Katharina Staber, Stephan Ursprung, Judith Herrmann, Sebastian Werner, Andreas Lingg, Lisa C Adams, Haidara Almansour, Konstantin Nikolaou, Saif Afat

Purpose: This study evaluates the impact of high-resolution T2-weighted imaging (T2HR) combined with deep learning image reconstruction (DLR) on image quality, lesion delineation, and extraprostatic extension (EPE) assessment in prostate multiparametric MRI (mpMRI).

Materials and methods: This retrospective study included 69 patients who underwent mpMRI of the prostate on a 3 T scanner with DLR between April 2023 and March 2024. Routine mpMRI protocols adhering to the Prostate Imaging Reporting and Data System (PI-RADS) v2.1 were used, including an additional T2HR sequence [2 mm slice thickness, 4:31 min vs. 4:12 min for standard T2 (T2S)]. The image datasets were evaluated by two radiologists using a Likert scale ranging from 1 to 5, with 5 being the best for sharpness, lesion contours, motion artifacts, prostate border delineation, overall image quality, and diagnostic confidence. PI-RADS scoring and EPE suspicion were analyzed. The statistical methods used included the Wilcoxon signed-rank test and Cohen's kappa for inter-reader agreement.
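
For reference, paired Likert scores from the two protocols can be compared with the Wilcoxon signed-rank test, and inter-reader agreement with Cohen's kappa, as in the sketch below; the ratings are simulated, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
n = 69  # cohort size in the study

# Hypothetical paired Likert ratings (1-5) of the same patients under both protocols.
t2s_scores = rng.integers(3, 6, size=n)
t2hr_scores = np.clip(t2s_scores + rng.integers(0, 2, size=n), 1, 5)

# Paired, non-parametric comparison of the two protocols.
stat, p = wilcoxon(t2hr_scores, t2s_scores, zero_method="zsplit")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")

# Inter-reader agreement on hypothetical PI-RADS categories from two readers.
reader1 = rng.integers(1, 6, size=n)
reader2 = np.where(rng.random(n) < 0.8, reader1, rng.integers(1, 6, size=n))
print("Cohen's kappa:", round(cohen_kappa_score(reader1, reader2), 3))
```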

Results: T2HR significantly improved lesion contours (medians of 5 vs. 4, p < 0.001), prostate border delineation (medians of 5 vs. 4, p < 0.001), and overall image quality (medians of 5 vs. 4, p < 0.001) compared to T2S. However, motion artifacts were significantly worse in T2HR. Substantial inter-reader agreement was observed in the PI-RADS scoring. EPE detection marginally increased with T2HR, though histopathological validation was limited.

Conclusion: T2HR imaging with DLR enhances image quality, lesion delineation, and diagnostic confidence without significantly prolonged acquisition time. It shows potential for improving EPE assessment in prostate cancer but requires further validation in larger studies.

Citations: 0
A systematic review and meta-analysis of GPT-based differential diagnostic accuracy in radiological cases: 2023-2025.
IF 2.3 Pub Date: 2025-10-28 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1670517
Daniel Nguyen, Isaac Bronson, Ryan Chen, Young H Kim

Objective: To systematically evaluate the diagnostic accuracy of various GPT models in radiology, focusing on differential diagnosis performance across textual and visual input modalities, model versions, and clinical contexts.

Methods: A systematic review and meta-analysis were conducted using the PubMed and SCOPUS databases on March 24, 2025, retrieving 639 articles. Studies were eligible if they evaluated GPT model diagnostic accuracy on radiology cases. Non-radiology applications, fine-tuned/custom models, board-style multiple-choice questions, or studies lacking accuracy data were excluded. After screening, 28 studies were included. Risk of bias was assessed using the Newcastle-Ottawa Scale (NOS). Diagnostic accuracy was assessed as top diagnosis accuracy (correct diagnosis listed first) and differential accuracy (correct diagnosis listed anywhere). Statistical analysis involved Mann-Whitney U tests on study-level median accuracy with interquartile ranges (IQR), and a generalized linear mixed-effects model (GLMM) to evaluate predictors influencing model performance.
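
To illustrate the study-level comparison described above, the sketch below runs a two-sided Mann-Whitney U test on hypothetical study-level accuracy values for two model groups; the numbers are invented for demonstration, and the GLMM step is not shown.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical study-level differential-accuracy values (%) for two model groups.
newer_model_acc = np.array([58.2, 61.0, 49.5, 72.3, 55.1, 63.8])
older_model_acc = np.array([36.3, 41.2, 30.8, 44.9, 38.5])

# Two-sided Mann-Whitney U test on the two sets of study-level accuracies.
u_stat, p_value = mannwhitneyu(newer_model_acc, older_model_acc, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
print("medians:", np.median(newer_model_acc), "vs.", np.median(older_model_acc))
```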

Results: Analysis included 8,852 radiological cases across multiple radiology subspecialties. Differential accuracy varied significantly among GPT models, with newer models (GPT-4T: 72.00%, median 82.32%; GPT-4o: 57.23%, median 53.75%; GPT-4: 56.46%, median 56.65%) outperforming earlier versions (GPT-3.5: 37.87%, median 36.33%). Textual inputs demonstrated higher accuracy (GPT-4: 56.46%, median 58.23%) compared to visual inputs (GPT-4V: 42.32%, median 41.41%). The provision of clinical history was associated with improved diagnostic accuracy in the GLMM (OR = 1.27, p = .001), despite unadjusted medians showing lower performance when history was provided (61.74% vs. 52.28%). Private data (86.51%, median 94.00%) yielded higher accuracy than public data (47.62%, median 46.45%). Accuracy trends indicated improvement in newer models over time, while GPT-3.5's accuracy declined. GLMM results showed higher odds of accuracy for advanced models (OR = 1.84), and lower odds for visual inputs (OR = 0.29) and public datasets (OR = 0.34), while accuracy showed no significant trend over successive study years (p = 0.57). Egger's test found no significant publication bias, though considerable methodological heterogeneity was observed.

Conclusion: This meta-analysis highlights significant variability in GPT model performance influenced by input modality, data source, and model version. High methodological heterogeneity across studies emphasizes the need for standardized protocols in future research, and readers should interpret pooled estimates and medians with this variability in mind.

Citations: 0
Integrating clinical indications and patient demographics for multilabel abnormality classification and automated report generation in 3D chest CT scans.
IF 2.3 Pub Date: 2025-10-24 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1672364
Theo Di Piazza, Carole Lazarus, Olivier Nempont, Loic Boussel

The increasing number of computed tomography (CT) scan examinations and the time-intensive nature of manual analysis necessitate efficient automated methods to assist radiologists in managing their increasing workload. While deep learning approaches primarily classify abnormalities from three-dimensional (3D) CT images, radiologists also incorporate clinical indications and patient demographics, such as age and sex, for diagnosis. This study aims to enhance multilabel abnormality classification and automated report generation by integrating imaging and non-imaging data. We propose a multimodal deep learning model that combines 3D chest CT scans, clinical information reports, patient age, and sex to improve diagnostic accuracy. Our method extracts visual features from 3D volumes using a visual encoder, textual features from clinical indications via a pretrained language model, and demographic features through a lightweight feedforward neural network. These extracted features are projected into a shared representation space, concatenated, and processed by a projection head to predict abnormalities. For the multilabel classification task, incorporating clinical indications and patient demographics into an existing visual encoder, called CT-Net, improves the F1 score to 51.58, representing a +6.13% increase over CT-Net alone. For the automated report generation task, we extend two existing methods, CT2Rep and CT-AGRG, by integrating clinical indications and demographic data. This integration enhances Clinical Efficacy metrics, yielding an F1 score improvement of +14.78% for the CT2Rep extension and +6.69% for the CT-AGRG extension. Our findings suggest that incorporating patient demographics and clinical information into deep learning frameworks can significantly improve automated CT scan analysis. This approach has the potential to enhance radiological workflows and facilitate more comprehensive and accurate abnormality detection in clinical practice.
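
A minimal sketch of the fusion design described above, projecting image, text, and demographic features into a shared space, concatenating them, and applying a projection head for multilabel prediction, is given below; the encoder output dimensions, label count, and layer sizes are assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalFusionHead(nn.Module):
    """Projects image, text, and demographic features into a shared space,
    concatenates them, and predicts multilabel abnormality logits."""

    def __init__(self, img_dim=512, txt_dim=768, demo_dim=2,
                 shared_dim=256, n_labels=18):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, shared_dim)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)
        self.demo_proj = nn.Sequential(               # lightweight feedforward branch
            nn.Linear(demo_dim, 32), nn.ReLU(), nn.Linear(32, shared_dim)
        )
        self.head = nn.Sequential(
            nn.Linear(3 * shared_dim, shared_dim), nn.ReLU(),
            nn.Linear(shared_dim, n_labels)
        )

    def forward(self, img_feat, txt_feat, demo_feat):
        z = torch.cat([
            self.img_proj(img_feat),
            self.txt_proj(txt_feat),
            self.demo_proj(demo_feat),
        ], dim=-1)
        return self.head(z)   # logits; train with BCEWithLogitsLoss for multilabel

# Toy forward pass with a batch of 4 studies (features assumed precomputed by encoders).
model = MultimodalFusionHead()
logits = model(torch.randn(4, 512), torch.randn(4, 768), torch.randn(4, 2))
print(logits.shape)   # torch.Size([4, 18])
```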

Citations: 0
Feasibility of artificial intelligence-assisted fast magnetic resonance imaging technology in the ankle joint injury: a comparison of the proton density-weighted image.
IF 2.3 Pub Date: 2025-10-24 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1673619
Sihan Xu, Wenjuan Cao, Luyi Wang, Pangxing Guo, Yuhai Cao, Honghai Chen

Objective: To evaluate the image quality and diagnostic efficacy of proton density-weighted MRI with intelligent quick magnetic resonance (iQMR) technology in the ankle joint injury.

Materials and methods: Forty-six patients with ankle injuries were prospectively enrolled, and proton density-weighted fat-suppression imaging was performed on a 3.0 T MRI scanner with both an iQMR protocol (48.28 s) and a conventional protocol (113.00 s). The original images were processed using iQMR to improve spatial resolution and reduce noise interference, yielding four image sets (iQMR raw, iQMR-processed, conventional raw, and conventional-processed). Image quality and diagnostic efficacy were assessed using objective metrics (signal-to-noise ratio, SNR; contrast-to-noise ratio, CNR), subjective scores (tissue edge clarity/sharpness, signal uniformity, fat-suppression uniformity, vascular pulsation artifacts, and overall image quality), and ligament/tendon injury grade.
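
For reference, SNR and CNR are commonly computed from region-of-interest statistics as in the sketch below; the exact ROI and noise definitions used in the study may differ, and the intensity values here are simulated.

```python
import numpy as np

def snr(roi, noise):
    """SNR = mean signal in the tissue ROI / standard deviation of the noise ROI."""
    return roi.mean() / noise.std()

def cnr(roi_a, roi_b, noise):
    """CNR = |mean(A) - mean(B)| / standard deviation of the noise ROI."""
    return abs(roi_a.mean() - roi_b.mean()) / noise.std()

# Hypothetical pixel intensities drawn from three regions of one image.
rng = np.random.default_rng(3)
talus = rng.normal(900, 40, size=500)                    # bone ROI
flexor_hallucis_longus = rng.normal(300, 35, size=500)   # tendon ROI
background_noise = rng.normal(0, 20, size=500)           # air region outside the ankle

print("SNR(talus):", round(snr(talus, background_noise), 1))
print("CNR(talus vs. FHL):", round(cnr(talus, flexor_hallucis_longus, background_noise), 1))
```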

Results: The SNRs (tibia, talus, etc.) and CNRs (talus-flexor hallucis longus, etc.) of the iQMR-processed images were significantly higher than those of the conventional raw images (P < 0.05), except for the SNR of the Achilles tendon (P > 0.05). The iQMR-processed images were also superior to the conventional raw images in edge clarity/sharpness, signal uniformity, and overall image quality (P < 0.05), with no significant differences in fat-suppression uniformity or vascular pulsation artifacts (P > 0.05). There was no significant difference among the four image sets in ligament/tendon injury grading (P > 0.05), but the iQMR-processed images improved diagnostic confidence (κ = 0.919).

Conclusion: The iQMR technology effectively shortens scan time and improves image quality without affecting diagnostic accuracy; it is especially suitable for patients prone to motion artifacts and helps optimize the clinical workflow.

Citations: 0