Performance of ChatGPT and Dental Students on Concepts of Periodontal Surgery.

IF 1.7 · CAS Region 4 (Education) · Q3 (Dentistry, Oral Surgery & Medicine) · European Journal of Dental Education · Pub Date: 2024-10-24 · DOI: 10.1111/eje.13047
Chen Li, Jinmei Zhang, John Abdul-Masih, Sihan Zhang, Jingmei Yang

Abstract

Introduction: As a large language model, chat generative pretrained transformer (ChatGPT) has become a valuable tool for various medical scenarios through its interactive, dialogue-based interface. However, studies of ChatGPT's effectiveness on dental tasks are lacking. This study aimed to compare the knowledge and comprehension abilities of ChatGPT-3.5/4 with those of dental students regarding periodontal surgery.

Materials and methods: A total of 134 dental students participated in this study. We designed a questionnaire consisting of four questions about students' inclination to use ChatGPT, 25 multiple-choice questions, and one open-ended question. For the comparison between ChatGPT-3.5 and 4, the inclination questions were removed and the rest was kept the same. We measured the response times of ChatGPT-3.5 and 4 and compared their performance with that of the dental students. For students' answers to the open-ended question, we also compared the outcomes of ChatGPT-4's review with those of the teacher's review.

Results: On average, ChatGPT-3.5 and 4 required 3.63 ± 1.18 s (95% confidence interval [CI], 3.14, 4.11) and 12.49 ± 7.29 s (95% CI, 9.48, 15.50) per multiple-choice question, respectively (p < 0.001). Across these 25 questions, the number of correct answers was 21.51 ± 2.72 for students, 14 for ChatGPT-3.5 and 20 for ChatGPT-4. Furthermore, the outcomes of ChatGPT-4's review were consistent with those of the teacher's review.
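The abstract does not state how the confidence intervals were computed. As a sanity check, a two-sided Student's t interval over the n = 25 questions reproduces the reported ChatGPT-4 interval; this reconstruction is an assumption, not the authors' stated method. The critical value 2.064 (t at the 0.975 quantile, 24 df) is supplied as a constant so the sketch needs only the standard library.

```python
import math

# Reported in the abstract: mean +/- SD response time of ChatGPT-4
# across the 25 multiple-choice questions.
mean, sd, n = 12.49, 7.29, 25

# Assumption: a two-sided t-based CI with df = n - 1.
T_CRIT = 2.064  # t_{0.975, df=24}, tabulated constant
half_width = T_CRIT * sd / math.sqrt(n)
lo, hi = mean - half_width, mean + half_width
print(round(lo, 2), round(hi, 2))  # matches the reported (9.48, 15.50)
```

The same formula applied to ChatGPT-3.5 (3.63 ± 1.18 s) gives (3.14, 4.12) versus the reported (3.14, 4.11); the small discrepancy is consistent with rounding of the published mean and SD.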

Conclusions: For dental examinations related to periodontal surgery, ChatGPT's accuracy was not yet comparable to that of the students. Nevertheless, ChatGPT shows promise in assisting students with the curriculum and helping practitioners with clinical letters and reviews of students' textual descriptions.

Source journal
CiteScore: 4.10
Self-citation rate: 16.70%
Articles per year: 127
Review time: 6-12 weeks
Journal description: The aim of the European Journal of Dental Education is to publish original topical and review articles of the highest quality in the field of dental education. The Journal seeks to disseminate widely the latest information on curriculum development, teaching methodologies, assessment techniques and quality assurance in the fields of dental undergraduate and postgraduate education and dental auxiliary personnel training. The scope includes the dental educational aspects of the basic medical sciences, the behavioural sciences, the interface with medical education, information technology and distance learning, and educational audit. Papers embodying the results of high-quality educational research of relevance to dentistry are particularly encouraged, as are evidence-based reports of novel and established educational programmes and their outcomes.