Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions.

Advances in Medical Education and Practice (IF 1.8, Q2 Education, Scientific Disciplines). Pub Date: 2024-09-20; eCollection Date: 2024-01-01. DOI: 10.2147/AMEP.S479801
Malik Sallam, Khaled Al-Salahat, Huda Eid, Jan Egger, Behrus Puladi
{"title":"Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions.","authors":"Malik Sallam, Khaled Al-Salahat, Huda Eid, Jan Egger, Behrus Puladi","doi":"10.2147/AMEP.S479801","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial intelligence (AI) chatbots excel in language understanding and generation. These models can transform healthcare education and practice. However, it is important to assess the performance of such AI models in various topics to highlight its strengths and possible limitations. This study aimed to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4), Bing, and Bard compared to human students at a postgraduate master's level in Medical Laboratory Sciences.</p><p><strong>Methods: </strong>The study design was based on the METRICS checklist for the design and reporting of AI-based studies in healthcare. The study utilized a dataset of 60 Clinical Chemistry multiple-choice questions (MCQs) initially conceived for assessing 20 MSc students. The revised Bloom's taxonomy was used as the framework for classifying the MCQs into four cognitive categories: Remember, Understand, Analyze, and Apply. A modified version of the CLEAR tool was used for the assessment of the quality of AI-generated content, with Cohen's κ for inter-rater agreement.</p><p><strong>Results: </strong>Compared to the mean students' score which was 0.68±0.23, GPT-4 scored 0.90 ± 0.30, followed by Bing (0.77 ± 0.43), GPT-3.5 (0.73 ± 0.45), and Bard (0.67 ± 0.48). Statistically significant better performance was noted in lower cognitive domains (Remember and Understand) in GPT-3.5 (<i>P</i>=0.041), GPT-4 (<i>P</i>=0.003), and Bard (<i>P</i>=0.017) compared to the higher cognitive domains (Apply and Analyze). The CLEAR scores indicated that ChatGPT-4 performance was \"Excellent\" compared to the \"Above average\" performance of ChatGPT-3.5, Bing, and Bard.</p><p><strong>Discussion: </strong>The findings indicated that ChatGPT-4 excelled in the Clinical Chemistry exam, while ChatGPT-3.5, Bing, and Bard were above average. Given that the MCQs were directed to postgraduate students with a high degree of specialization, the performance of these AI chatbots was remarkable. Due to the risk of academic dishonesty and possible dependence on these AI models, the appropriateness of MCQs as an assessment tool in higher education should be re-evaluated.</p>","PeriodicalId":47404,"journal":{"name":"Advances in Medical Education and Practice","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11421444/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Medical Education and Practice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2147/AMEP.S479801","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Artificial intelligence (AI) chatbots excel in language understanding and generation, and these models can transform healthcare education and practice. However, it is important to assess the performance of such AI models across various topics to highlight their strengths and possible limitations. This study aimed to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4), Bing, and Bard compared to human students at the postgraduate master's level in Medical Laboratory Sciences.

Methods: The study design was based on the METRICS checklist for the design and reporting of AI-based studies in healthcare. The study utilized a dataset of 60 Clinical Chemistry multiple-choice questions (MCQs) initially conceived for assessing 20 MSc students. The revised Bloom's taxonomy was used as the framework for classifying the MCQs into four cognitive categories: Remember, Understand, Analyze, and Apply. A modified version of the CLEAR tool was used for the assessment of the quality of AI-generated content, with Cohen's κ for inter-rater agreement.
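For context on the inter-rater agreement step, the following is a minimal sketch of how Cohen's κ can be computed for two raters scoring the same set of chatbot responses. The rater labels and scores below are hypothetical and purely illustrative; this is not the authors' analysis code.

```python
# Minimal illustration: Cohen's kappa for two raters scoring the same items.
# The scores below are invented for demonstration and are not study data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical CLEAR-style item ratings from two independent raters
# for the same set of AI-generated answers.
rater_a = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
rater_b = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

For ordinal rating scales, a weighted variant (e.g., `cohen_kappa_score(rater_a, rater_b, weights="quadratic")`) is often preferred; the abstract does not state which variant was used here.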

Results: Compared to the students' mean score of 0.68 ± 0.23, GPT-4 scored 0.90 ± 0.30, followed by Bing (0.77 ± 0.43), GPT-3.5 (0.73 ± 0.45), and Bard (0.67 ± 0.48). Significantly better performance on the lower cognitive domains (Remember and Understand) than on the higher cognitive domains (Apply and Analyze) was noted for GPT-3.5 (P=0.041), GPT-4 (P=0.003), and Bard (P=0.017). The CLEAR scores indicated that ChatGPT-4's performance was "Excellent" compared to the "Above average" performance of ChatGPT-3.5, Bing, and Bard.
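The abstract does not specify which statistical test produced these P values. As a purely hypothetical illustration, per-question correctness could be compared between lower-order (Remember/Understand) and higher-order (Apply/Analyze) questions with Fisher's exact test on a 2×2 table; the counts below are invented, and the study's actual test may differ.

```python
# Hypothetical sketch: comparing accuracy on lower- vs higher-order questions.
# The counts are invented for illustration only; they are not study results.
from scipy.stats import fisher_exact

# 2x2 contingency table: [correct, incorrect] per cognitive level
lower_order = [28, 2]    # Remember + Understand
higher_order = [22, 8]   # Apply + Analyze

odds_ratio, p_value = fisher_exact([lower_order, higher_order])
print(f"Odds ratio: {odds_ratio:.2f}, P = {p_value:.3f}")
```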

Discussion: The findings indicated that ChatGPT-4 excelled in the Clinical Chemistry exam, while ChatGPT-3.5, Bing, and Bard performed above average. Given that the MCQs were directed at postgraduate students with a high degree of specialization, the performance of these AI chatbots was remarkable. Due to the risk of academic dishonesty and possible dependence on these AI models, the appropriateness of MCQs as an assessment tool in higher education should be re-evaluated.

Source Journal
Advances in Medical Education and Practice (Education, Scientific Disciplines)
CiteScore: 3.10
Self-citation rate: 10.00%
Articles published: 189
Review time: 16 weeks