{"title":"ChatGPT-4o's Performance in Brain Tumor Diagnosis and MRI Findings: A Comparative Analysis with Radiologists.","authors":"Cemre Ozenbas, Duygu Engin, Tayfun Altinok, Emrah Akcay, Ulas Aktas, Alper Tabanli","doi":"10.1016/j.acra.2025.01.033","DOIUrl":null,"url":null,"abstract":"<p><strong>Rationale and objectives: </strong>To evaluate the accuracy of ChatGPT-4o in identifying magnetic resonance imaging (MRI) findings and diagnosing brain tumors by comparing its performance with that of experienced radiologists.</p><p><strong>Materials and methods: </strong>This retrospective study included 46 patients with pathologically confirmed brain tumors who underwent preoperative MRI between January 2021 and October 2024. Two experienced radiologists and ChatGPT 4o independently evaluated the anonymized MRI images. Eight questions focusing on MRI sequences, lesion characteristics, and diagnoses were answered. ChatGPT-4o's responses were compared to those of the radiologists and the pathology outcomes. Statistical analyses were performed, which included accuracy, sensitivity, specificity, and the McNemar test, with p<0.05 considered to indicate a statistically significant difference.</p><p><strong>Results: </strong>ChatGPT-4o successfully identified 44 of the 46 (95.7%) lesions; it achieved 88.3% accuracy in identifying MRI sequences, 81% in perilesional edema, 79.5% in signal characteristics, and 82.2% in contrast enhancement. However, its accuracy in localizing lesions was 53.6% and that in distinguishing extra-axial from intra-axial lesions was 26.3%. As such, ChatGPT-4o achieved success rates of 56.8% and 29.5% for differential diagnoses and most likely diagnoses when compared to 93.2-90.9% and 70.5-65.9% for radiologists, respectively (p<0.005).</p><p><strong>Conclusion: </strong>ChatGPT-4o demonstrated high accuracy in identifying certain MRI features but underperformed in diagnostic tasks in comparison with the radiologists. Despite its current limitations, future updates and advancements have the potential to enable large language models to facilitate diagnosis and offer a reliable second opinion to radiologists.</p>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":" ","pages":""},"PeriodicalIF":3.8000,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.acra.2025.01.033","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
Rationale and objectives: To evaluate the accuracy of ChatGPT-4o in identifying magnetic resonance imaging (MRI) findings and diagnosing brain tumors by comparing its performance with that of experienced radiologists.
Materials and methods: This retrospective study included 46 patients with pathologically confirmed brain tumors who underwent preoperative MRI between January 2021 and October 2024. Two experienced radiologists and ChatGPT-4o independently evaluated the anonymized MRI images, answering eight questions focused on MRI sequences, lesion characteristics, and diagnoses. ChatGPT-4o's responses were compared with those of the radiologists and with the pathology outcomes. Statistical analyses included accuracy, sensitivity, specificity, and the McNemar test, with p<0.05 considered statistically significant.
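The methods describe a paired comparison of reader performance evaluated with the McNemar test. As a minimal illustrative sketch (not taken from the paper), the snippet below shows how such a paired comparison of per-case diagnostic correctness against the pathology reference could be computed in Python with statsmodels, assuming binary correct/incorrect labels for the same 46 cases; the variable names and example data are hypothetical.

# Illustrative sketch: McNemar test on paired per-case correctness
# (1 = correct vs. pathology, 0 = incorrect) for two readers on the same cases.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-case correctness vectors for 46 lesions (placeholder data).
rng = np.random.default_rng(0)
gpt_correct = rng.integers(0, 2, size=46)
rad_correct = rng.integers(0, 2, size=46)

# 2x2 table of paired outcomes: rows = radiologist, columns = ChatGPT-4o.
table = np.array([
    [np.sum((rad_correct == 1) & (gpt_correct == 1)),
     np.sum((rad_correct == 1) & (gpt_correct == 0))],
    [np.sum((rad_correct == 0) & (gpt_correct == 1)),
     np.sum((rad_correct == 0) & (gpt_correct == 0))],
])

# Exact binomial McNemar test, appropriate when discordant counts are small.
result = mcnemar(table, exact=True)
print(f"McNemar statistic: {result.statistic}, p-value: {result.pvalue:.4f}")

# Simple accuracy for each reader against the pathology reference.
print("ChatGPT-4o accuracy:", gpt_correct.mean())
print("Radiologist accuracy:", rad_correct.mean())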
Results: ChatGPT-4o successfully identified 44 of the 46 lesions (95.7%); it achieved 88.3% accuracy in identifying MRI sequences, 81% for perilesional edema, 79.5% for signal characteristics, and 82.2% for contrast enhancement. However, its accuracy in localizing lesions was 53.6%, and its accuracy in distinguishing extra-axial from intra-axial lesions was 26.3%. ChatGPT-4o achieved success rates of 56.8% for differential diagnosis and 29.5% for the most likely diagnosis, compared with 93.2-90.9% and 70.5-65.9%, respectively, for the radiologists (p<0.005).
Conclusion: ChatGPT-4o demonstrated high accuracy in identifying certain MRI features but underperformed in diagnostic tasks in comparison with the radiologists. Despite its current limitations, future updates and advancements have the potential to enable large language models to facilitate diagnosis and offer a reliable second opinion to radiologists.
Journal introduction:
Academic Radiology publishes original reports of clinical and laboratory investigations in diagnostic imaging, the diagnostic use of radioactive isotopes, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasound, digital subtraction angiography, image-guided interventions and related techniques. It also includes brief technical reports describing original observations, techniques, and instrumental developments; state-of-the-art reports on clinical issues, new technology and other topics of current medical importance; meta-analyses; scientific studies and opinions on radiologic education; and letters to the Editor.