Constructing a Unified Vision-Language Model for Chest Radiograph-based Diagnostics, Medical Education, and Data Augmentation

Ling Yang, Xinyu Liang, Zhanyu Wang, Ziyu Diao, Xuan Huang, Die Shen, Xin Tan, Haifeng Li, Zhenghao Chen, Shijun Qiu, Luping Zhou
Purpose: To develop MedXChat, a large language model (LLM) that integrates radiology report generation, visual question answering (VQA), and text-to-image synthesis, and to evaluate its performance using computational metrics and expert radiologist assessments.

Materials and Methods: In this retrospective study, MedXChat was trained on the MIMIC Chest X-ray (MIMIC-CXR) database, comprising 270 790 chest radiograph-report pairs, 54 138 VQA samples, and 7500 text-to-image instruction pairs. Data were collected from 2011 to 2016. Computational evaluations of MedXChat performance used the F1 score, area under the receiver operating characteristic curve (AUC), and Fréchet inception distance (FID). Radiologist evaluations involved six experts (three junior, two senior, and one supervisor) who assessed 50 randomly selected MedXChat outputs for accuracy, consistency, and alignment with clinical standards.

Results: In the chest radiograph-to-report test set, MedXChat achieved an AUC of 0.67 (95% CI: 0.61, 0.75), higher than UniXGen (AUC, 0.54; P < .001) and LLM-CXR (AUC, 0.63; P = .02). Its F1 score was 0.44 versus 0.26 (P < .001) and 0.41 (P = .04), respectively. In chest radiograph VQA, MedXChat showed higher accuracy for edema (73% vs 54% for LLM-CXR and 60% for LLaVA-Med) and pleural effusion (80% vs 53% for LLM-CXR and 61% for LLaVA-Med; all P ≤ .01). In text-to-image synthesis, it achieved the lowest FID (43.46 vs 73.29 and 106.17; P < .001) and the highest classification accuracy (71.5% vs 68.6% and 67.2%; P ≤ .05), producing high-quality images, including lateral views.

Conclusion: MedXChat integrated report generation, VQA, and image synthesis within a unified framework, achieving state-of-the-art performance. MedXChat may support future professional applications and enhance radiologic workflows, education, and data augmentation.

Keywords: Computer Aided Diagnosis (CAD), Applications - Decision Support, Applications - Multimodal, Outcomes Analysis, Technology Assessment, Comparative Studies

Supplemental material is available for this article. © RSNA, 2025.
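For readers unfamiliar with the three computational metrics named above, the sketch below shows how they are conventionally computed. This is a minimal illustrative example, not the authors' evaluation pipeline: the label arrays and Inception feature matrices are hypothetical placeholders, and only standard NumPy/SciPy/scikit-learn calls are used.

    # Minimal sketch of F1, AUC, and FID as conventionally defined.
    # All input arrays below are hypothetical stand-ins for model outputs.
    import numpy as np
    from scipy.linalg import sqrtm
    from sklearn.metrics import f1_score, roc_auc_score

    # --- Report generation: finding-level F1 and AUC ---
    # y_true: ground-truth finding labels from reference reports;
    # y_prob: model-predicted probabilities for the same findings.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
    print("F1 :", f1_score(y_true, (y_prob >= 0.5).astype(int)))
    print("AUC:", roc_auc_score(y_true, y_prob))

    # --- Text-to-image synthesis: Fréchet inception distance ---
    def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
        """FID between two sets of Inception feature vectors, shape (n, dim)."""
        mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        cov_r = np.cov(feats_real, rowvar=False)
        cov_f = np.cov(feats_fake, rowvar=False)
        covmean = sqrtm(cov_r @ cov_f)
        if np.iscomplexobj(covmean):   # numerical noise can leave tiny
            covmean = covmean.real     # imaginary parts; discard them
        return float(np.sum((mu_r - mu_f) ** 2)
                     + np.trace(cov_r + cov_f - 2 * covmean))

    # Hypothetical feature matrices for real and generated radiographs.
    rng = np.random.default_rng(0)
    feats_real = rng.normal(0.0, 1.0, size=(256, 64))
    feats_fake = rng.normal(0.1, 1.0, size=(256, 64))
    print("FID:", fid(feats_real, feats_fake))

Lower FID indicates that generated-image features are distributionally closer to real-image features, which is why the abstract reports MedXChat's 43.46 as the best of the three compared models.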
{"title":"Constructing a Unified Vision-Language Model for Chest Radiograph-based Diagnostics, Medical Education, and Data Augmentation.","authors":"Ling Yang, Xinyu Liang, Zhanyu Wang, Ziyu Diao, Xuan Huang, Die Shen, Xin Tan, Haifeng Li, Zhenghao Chen, Shijun Qiu, Luping Zhou","doi":"10.1148/ryct.250033","DOIUrl":"https://doi.org/10.1148/ryct.250033","url":null,"abstract":"<p><p>Purpose To develop MedXChat, a large language model (LLM) capable of integrating radiology report generation, visual question answering (VQA), and text-to-image synthesis and evaluate its performance via computational metrics and expert radiologist assessments. Materials and Methods In this retrospective study, MedXChat was trained on the MIMIC Chest X-ray (MIMIC-CXR) database, comprising 270 790 chest radiograph-report pairs, 54 138 VQA samples, and 7500 text-to-image instruction pairs. Data were collected from 2011 to 2016. Computational evaluations of MedXChat performance were conducted using the F1 score, area under the receiver operating characteristic curve (AUC), and Fréchet inception distance (FID). Radiologist evaluations involved six experts-three junior, two senior, and one supervisor-who assessed 50 random MedXChat outputs for accuracy, consistency, and alignment with clinical standards. Results In the chest radiograph-to-report test set, MedXChat achieved an AUC of 0.67 (95% CI: 0.61, 0.75), higher than UniXGen (AUC, 0.54; <i>P</i> < .001) and LLM-CXR (AUC, 0.63; <i>P</i> = .02). Its F1 score was 0.44 versus 0.26 (<i>P</i> < .001) and 0.41 (<i>P</i> = .04), respectively. In chest radiograph-VQA, MedXChat showed higher accuracy for edema (73% vs 54% for LLM-CXR and 60% for LLaVA-Med) and pleural effusion (80% vs 53% for LLM-CXR and 61% for LLaVA-Med; all <i>P</i> ≤ .01). In text-to-image synthesis, it achieved the lowest FID (43.46 vs 73.29 and 106.17; <i>P</i> < .001) and the highest classification accuracy (71.5% vs 68.6% and 67.2%; <i>P</i> ≤ .05), producing high-quality images including lateral views. Conclusion MedXChat integrated report generation, VQA, and image synthesis within a unified framework, achieving state-of-the-art performance. MedXChat may support future professional applications and enhance radiologic workflows, education, and data augmentation. <b>Keywords:</b> Computer Aided Diagnosis (CAD), Applications - Decision Support, Applications - Multimodal, Outcomes Analysis, Technology Assessment, Comparative Studies <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":21168,"journal":{"name":"Radiology. Cardiothoracic imaging","volume":"7 6","pages":"e250033"},"PeriodicalIF":4.2,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145775316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}