ChatGPT-4 Performance on USMLE Step 1 Style Questions and Its Implications for Medical Education: A Comparative Study Across Systems and Disciplines.

Medical Science Educator · IF 1.9 · Q2 (Education, Scientific Disciplines) · Publication Date: 2023-12-27 · eCollection Date: 2024-02-01 · DOI: 10.1007/s40670-023-01956-z
Razmig Garabet, Brendan P Mackey, James Cross, Michael Weingarten

Abstract

We assessed the performance of OpenAI's ChatGPT-4 on United States Medical Licensing Exam STEP 1 style questions across the systems and disciplines appearing on the examination. ChatGPT-4 answered 86% of the 1300 questions accurately, exceeding the estimated passing score of 60% with no significant differences in performance across clinical domains. Findings demonstrated an improvement over earlier models as well as consistent performance in topics ranging from complex biological processes to ethical considerations in patient care. Its proficiency provides support for the use of artificial intelligence (AI) as an interactive learning tool and furthermore raises questions about how the technology can be used to educate students in the preclinical component of their medical education. The authors provide an example and discuss how students can leverage AI to receive real-time analogies and explanations tailored to their desired level of education. An appropriate application of this technology potentially enables enhancement of learning outcomes for medical students in the preclinical component of their education.

Source Journal

Medical Science Educator (Social Sciences – Education)

CiteScore: 2.90
Self-citation rate: 11.80%
Articles published: 202
Journal description: Medical Science Educator is the successor of the journal JIAMSE. It is the peer-reviewed publication of the International Association of Medical Science Educators (IAMSE). The Journal offers all who teach in healthcare the most current information to succeed in their task by publishing scholarly activities, opinions, and resources in medical science education. Published articles focus on teaching the sciences fundamental to modern medicine and health, and include basic science education, clinical teaching, and the use of modern education technologies. The Journal provides the readership a better understanding of teaching and learning techniques in order to advance medical science education.