Evaluating the Performance of ChatGPT-4o Vision Capabilities on Image-Based USMLE Step 1, Step 2, and Step 3 Examination Questions
Avi A Gajjar, Harshitha Valluri, Tarun Prabhala, Amanda Custozzo, Alan S. Boulos, John C. Dalfino, Nicholas C. Field, Alexandra R. Paul
{"title":"评估 ChatGPT-4o 视觉功能在基于图像的 USMLE 第 1 步、第 2 步和第 3 步考试问题上的表现","authors":"Avi A Gajjar, Harshitha Valluri, Tarun Prabhala, Amanda Custozzo, Alan S. Boulos, John C. Dalfino, Nicholas C. Field, Alexandra R. Paul","doi":"10.1101/2024.06.18.24309092","DOIUrl":null,"url":null,"abstract":"Introduction\nArtificial intelligence (AI) has significant potential in medicine, especially in diagnostics and education. ChatGPT has achieved levels comparable to medical students on text-based USMLE questions, yet there's a gap in its evaluation on image-based questions. Methods\nThis study evaluated ChatGPT-4's performance on image-based questions from USMLE Step 1, Step 2, and Step 3. A total of 376 questions, including 54 image-based, were tested using an image-captioning system to generate descriptions for the images. Results\nThe overall performance of ChatGPT-4 on USMLE Steps 1, 2, and 3 was evaluated using 376 questions, including 54 with images. The accuracy was 85.7% for Step 1, 92.5% for Step 2, and 86.9% for Step 3. For image-based questions, the accuracy was 70.8% for Step 1, 92.9% for Step 2, and 62.5% for Step 3. In contrast, text-based questions showed higher accuracy: 89.5% for Step 1, 92.5% for Step 2, and 90.1% for Step 3. Performance dropped significantly for difficult image-based questions in Steps 1 and 3 (p=0.0196 and p=0.0020 respectively), but not in Step 2 (p=0.9574). Despite these challenges, the AI's accuracy on image-based questions exceeded the passing rate for all three exams. Conclusions\nChatGPT-4 can handle image-based USMLE questions above the passing rate, showing promise for its use in medical education and diagnostics. Further development is needed to improve its direct image processing capabilities and overall performance.","PeriodicalId":501387,"journal":{"name":"medRxiv - Medical Education","volume":"41 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Performance of ChatGPT-4o Vision Capabilities on Image-Based USMLE Step 1, Step 2, and Step 3 Examination Questions\",\"authors\":\"Avi A Gajjar, Harshitha Valluri, Tarun Prabhala, Amanda Custozzo, Alan S. Boulos, John C. Dalfino, Nicholas C. Field, Alexandra R. Paul\",\"doi\":\"10.1101/2024.06.18.24309092\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Introduction\\nArtificial intelligence (AI) has significant potential in medicine, especially in diagnostics and education. ChatGPT has achieved levels comparable to medical students on text-based USMLE questions, yet there's a gap in its evaluation on image-based questions. Methods\\nThis study evaluated ChatGPT-4's performance on image-based questions from USMLE Step 1, Step 2, and Step 3. A total of 376 questions, including 54 image-based, were tested using an image-captioning system to generate descriptions for the images. Results\\nThe overall performance of ChatGPT-4 on USMLE Steps 1, 2, and 3 was evaluated using 376 questions, including 54 with images. The accuracy was 85.7% for Step 1, 92.5% for Step 2, and 86.9% for Step 3. For image-based questions, the accuracy was 70.8% for Step 1, 92.9% for Step 2, and 62.5% for Step 3. In contrast, text-based questions showed higher accuracy: 89.5% for Step 1, 92.5% for Step 2, and 90.1% for Step 3. 
Performance dropped significantly for difficult image-based questions in Steps 1 and 3 (p=0.0196 and p=0.0020 respectively), but not in Step 2 (p=0.9574). Despite these challenges, the AI's accuracy on image-based questions exceeded the passing rate for all three exams. Conclusions\\nChatGPT-4 can handle image-based USMLE questions above the passing rate, showing promise for its use in medical education and diagnostics. Further development is needed to improve its direct image processing capabilities and overall performance.\",\"PeriodicalId\":501387,\"journal\":{\"name\":\"medRxiv - Medical Education\",\"volume\":\"41 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"medRxiv - Medical Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.06.18.24309092\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Medical Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.06.18.24309092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Introduction
Artificial intelligence (AI) has significant potential in medicine, especially in diagnostics and education. ChatGPT has performed at levels comparable to those of medical students on text-based USMLE questions, yet its performance on image-based questions remains largely unevaluated.
Methods
This study evaluated ChatGPT-4's performance on image-based questions from USMLE Step 1, Step 2, and Step 3. A total of 376 questions, including 54 image-based questions, were tested; for the image-based questions, an image-captioning system was used to generate text descriptions of the images.
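The abstract does not include the captioning pipeline itself. The following is a minimal sketch of what a caption-then-answer workflow of this kind could look like, assuming the OpenAI Python SDK; the model names, prompts, and function names are illustrative assumptions, not the authors' published setup.

```python
# Hypothetical caption-then-answer pipeline (not the authors' published code).
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
import base64
from openai import OpenAI

client = OpenAI()

def caption_image(image_path: str) -> str:
    """Generate a text description of an exam image with a vision-capable model."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this medical exam image in detail, "
                         "including any labels, findings, or measurements."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

def answer_question(stem: str, choices: str, caption: str) -> str:
    """Answer a multiple-choice question using the caption in place of the image."""
    prompt = (
        f"Image description: {caption}\n\n{stem}\n{choices}\n"
        "Answer with the single best option letter."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```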
Results
The overall performance of ChatGPT-4 on USMLE Steps 1, 2, and 3 was evaluated using 376 questions, including 54 with images. Overall accuracy was 85.7% for Step 1, 92.5% for Step 2, and 86.9% for Step 3. For image-based questions, accuracy was 70.8% for Step 1, 92.9% for Step 2, and 62.5% for Step 3. In contrast, text-based questions showed higher accuracy: 89.5% for Step 1, 92.5% for Step 2, and 90.1% for Step 3. Performance dropped significantly on difficult image-based questions in Steps 1 and 3 (p=0.0196 and p=0.0020, respectively), but not in Step 2 (p=0.9574). Despite these challenges, the model's accuracy on image-based questions exceeded the passing threshold for all three exams.
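The abstract reports p-values for these accuracy differences but does not name the statistical test used. For 2x2 correct-versus-incorrect counts like these, Fisher's exact test is a common choice; the sketch below shows that comparison with SciPy, using made-up counts, since the abstract reports only percentages and p-values.

```python
# Hypothetical significance check for an accuracy gap between two question
# groups, using Fisher's exact test via SciPy. The counts below are invented
# for illustration; the abstract does not give the underlying tallies.
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = question group, columns = correct / incorrect.
table = [
    [17, 7],    # e.g., image-based questions: 17 correct, 7 incorrect
    [120, 14],  # e.g., text-based questions: 120 correct, 14 incorrect
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```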
Conclusions
ChatGPT-4 can answer image-based USMLE questions at a level above the passing threshold, showing promise for its use in medical education and diagnostics. Further development is needed to improve its direct image processing capabilities and overall performance.