Evaluating ChatGPT as a self-learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination

IF 1.2 | CAS Tier 4 (Education) | JCR Q4 (Biochemistry & Molecular Biology) | Biochemistry and Molecular Biology Education | Pub Date: 2023-12-19 | DOI: 10.1002/bmb.21808
Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
Citations: 0

Abstract

The emergence of ChatGPT as one of the most advanced chatbots, together with its ability to generate diverse content, has prompted numerous discussions worldwide regarding its utility, particularly in advancing medical education and research. This study assesses the performance of ChatGPT in medical biochemistry to evaluate its potential as an effective self-learning tool for medical students. The evaluation was carried out using the university examination question papers for parts 1 and 2 of medical biochemistry, each comprising theory and multiple-choice questions (MCQs) for a total of 100 marks per part. The questions were posed to ChatGPT, and three raters independently reviewed and scored the answers to prevent bias in scoring. We computed the inter-item correlation matrix and the intraclass correlation among raters 1, 2, and 3. For the MCQs, symmetric measures in the form of kappa values (a measure of inter-rater agreement) were computed between raters 1, 2, and 3. ChatGPT generated relevant and appropriate answers to all questions, along with explanations for the MCQs. ChatGPT "passed" the medical biochemistry university examination with a total score of 117 out of 200 (58.5%) across both papers, securing 60 ± 2.29 in Paper 1 and 57 ± 4.36 in Paper 2. The kappa value for every pairwise comparison of the Rater 1, Rater 2, and Rater 3 MCQ scores was 1.000. The evaluation of ChatGPT as a self-learning tool in medical biochemistry has yielded important insights. While it is encouraging that ChatGPT demonstrated proficiency in this area, the overall score of 58.5% indicates that there is work to be done. To unlock its full potential as a self-learning tool, ChatGPT must generate content that is not only accurate but also comprehensive and contextually relevant.
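The abstract reports Cohen's kappa as the agreement measure between rater pairs on the MCQ scores, with a value of 1.000 for every pairing. The study's own analysis was presumably done in a statistics package; as a minimal illustrative sketch (not the authors' code), the calculation reduces to comparing observed agreement against chance agreement derived from each rater's label frequencies:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    # Observed proportion of items on which the raters agree
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal label frequencies
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    if pe == 1.0:
        return 1.0  # degenerate case: both raters used a single identical label
    return (po - pe) / (1 - pe)

# Hypothetical example: two raters marking MCQ answers correct (1) / incorrect (0).
# Identical scoring, as reported in the study, yields kappa = 1.0.
rater1 = [1, 1, 0, 1, 0, 1, 1, 0]
rater2 = [1, 1, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(rater1, rater2))  # -> 1.0
```

A kappa of 1.000 therefore means the three raters scored every MCQ identically, which is plausible here since MCQ answers are objectively right or wrong; the theory questions, by contrast, required the correlation-based measures.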

Source journal
Biochemistry and Molecular Biology Education (Biology — Biochemistry & Molecular Biology)
CiteScore: 2.60
Self-citation rate: 14.30%
Articles per year: 99
Review time: 6-12 weeks
Journal description: The aim of BAMBED is to enhance teacher preparation and student learning in Biochemistry, Molecular Biology, and related sciences such as Biophysics and Cell Biology, by promoting the worldwide dissemination of educational materials. BAMBED seeks and communicates articles on many topics, including: innovative techniques in teaching and learning; new pedagogical approaches; research in biochemistry and molecular biology education; reviews on emerging areas of Biochemistry and Molecular Biology to provide background for the preparation of lectures, seminars, student presentations, dissertations, etc.; historical reviews describing "Paths to Discovery"; novel and proven laboratory experiments that have both skill-building and discovery-based characteristics; reviews of relevant textbooks, software, and websites; descriptions of software for educational use; and descriptions of multimedia materials such as tutorials on various aspects of biochemistry and molecular biology.