Evaluating Artificial Intelligence Competency in Education: Performance of ChatGPT-4 in the American Registry of Radiologic Technologists (ARRT) Radiography Certification Exam
Yousif Al-Naser MRT(R), Felobater Halka, Boris Ng BEng, Dwight Mountford MRT(R) MSc, Sonali Sharma, Ken Niure MRT(R), Charlotte Yong-Hing MD, Faisal Khosa MD, Christian Van der Pol MD
{"title":"Evaluating Artificial Intelligence Competency in Education: Performance of ChatGPT-4 in the American Registry of Radiologic Technologists (ARRT) Radiography Certification Exam","authors":"Yousif Al-Naser MRT(R) , Felobater Halka , Boris Ng BEng , Dwight Mountford MRT(R) MSc , Sonali Sharma , Ken Niure MRT(R) , Charlotte Yong-Hing MD , Faisal Khosa MD , Christian Van der Pol MD","doi":"10.1016/j.acra.2024.08.009","DOIUrl":null,"url":null,"abstract":"<div><h3>Rationale and Objectives</h3><div>The American Registry of Radiologic Technologists (ARRT) leads the certification process with an exam comprising 200 multiple-choice questions. This study aims to evaluate ChatGPT-4's performance in responding to practice questions similar to those found in the ARRT board examination.</div></div><div><h3>Materials and Methods</h3><div>We used a dataset of 200 practice multiple-choice questions for the ARRT certification exam from BoardVitals. Each question was fed to ChatGPT-4 fifteen times, resulting in 3000 observations to account for response variability.</div></div><div><h3>Results</h3><div>ChatGPT's overall performance was 80.56%, with higher accuracy on text-based questions (86.3%) compared to image-based questions (45.6%). Response times were longer for image-based questions (18.01 s) than for text-based questions (13.27 s). Performance varied by domain: 72.6% for Safety, 70.6% for Image Production, 67.3% for Patient Care, and 53.4% for Procedures. As anticipated, performance was best on on easy questions (78.5%).</div></div><div><h3>Conclusion</h3><div>ChatGPT demonstrated effective performance on the BoardVitals question bank for ARRT certification. Future studies could benefit from analyzing the correlation between BoardVitals scores and actual exam outcomes. Further development in AI, particularly in image processing and interpretation, is necessary to enhance its utility in educational settings.</div></div>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"32 2","pages":"Pages 597-603"},"PeriodicalIF":3.8000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Radiology","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1076633224005725","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Abstract
Rationale and Objectives
The American Registry of Radiologic Technologists (ARRT) administers the radiography certification examination, which comprises 200 multiple-choice questions. This study aims to evaluate ChatGPT-4's performance on practice questions similar to those found on the ARRT board examination.
Materials and Methods
We used a dataset of 200 practice multiple-choice questions for the ARRT certification exam from BoardVitals. Each question was submitted to ChatGPT-4 fifteen times to account for response variability, yielding 3000 observations in total.
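To make the protocol concrete, a minimal sketch of the repeated-query loop is shown below. This is an illustration only: the paper does not specify how questions were submitted to ChatGPT-4, and the OpenAI Python client, model name, and prompt format here are assumptions.

```python
# Hypothetical sketch of the repeated-query protocol: each of the 200
# questions is submitted 15 times (200 x 15 = 3000 observations), and the
# answer and response time are recorded for every trial. The client setup
# and model name are assumptions, not details from the study.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
N_REPETITIONS = 15

def query_question(question_text: str) -> list[dict]:
    """Submit one multiple-choice question N_REPETITIONS times."""
    observations = []
    for trial in range(N_REPETITIONS):
        start = time.perf_counter()
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question_text}],
        )
        elapsed = time.perf_counter() - start
        observations.append({
            "trial": trial,
            "answer": response.choices[0].message.content,
            "response_time_s": elapsed,
        })
    return observations
```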
Results
ChatGPT-4's overall accuracy was 80.56%, with higher accuracy on text-based questions (86.3%) than on image-based questions (45.6%). Response times were longer for image-based questions (18.01 s) than for text-based questions (13.27 s). Accuracy varied by domain: 72.6% for Safety, 70.6% for Image Production, 67.3% for Patient Care, and 53.4% for Procedures. As anticipated, performance was best on easy questions (78.5%).
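Figures like these can be reproduced from graded per-trial records by simple grouping. A hedged sketch follows; the record fields (`domain`, `is_correct`) are illustrative and not the study's actual data structures.

```python
# Illustrative aggregation of graded observations into accuracy figures
# like those reported above (overall, per domain, text vs. image).
# The field names below are assumptions made for this sketch.
from collections import defaultdict

def accuracy_by(observations: list[dict], key: str) -> dict[str, float]:
    """Fraction of correct responses grouped by a field, e.g.
    accuracy_by(obs, "domain") -> {"Safety": 0.726, ...}."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for obs in observations:
        total[obs[key]] += 1
        correct[obs[key]] += int(obs["is_correct"])
    return {group: correct[group] / total[group] for group in total}
```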
Conclusion
ChatGPT-4 performed well on the BoardVitals question bank for ARRT certification. Future studies could benefit from analyzing the correlation between BoardVitals scores and actual exam outcomes. Further development in AI, particularly in image processing and interpretation, is necessary to enhance its utility in educational settings.
Journal Introduction:
Academic Radiology publishes original reports of clinical and laboratory investigations in diagnostic imaging, the diagnostic use of radioactive isotopes, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasound, digital subtraction angiography, image-guided interventions and related techniques. It also includes brief technical reports describing original observations, techniques, and instrumental developments; state-of-the-art reports on clinical issues, new technology and other topics of current medical importance; meta-analyses; scientific studies and opinions on radiologic education; and letters to the Editor.