Performance of ChatGPT and Dental Students on Concepts of Periodontal Surgery
Chen Li, Jinmei Zhang, John Abdul-Masih, Sihan Zhang, Jingmei Yang
European Journal of Dental Education (published 2024-10-24). DOI: 10.1111/eje.13047
Abstract
Introduction: As a large language model, Chat Generative Pre-trained Transformer (ChatGPT) has become a valuable tool in various medical scenarios through its interactive, dialogue-based interface. However, studies on ChatGPT's effectiveness in handling dental tasks are lacking. This study aimed to compare the knowledge and comprehension abilities of ChatGPT-3.5/4 with those of dental students regarding periodontal surgery.
Materials and methods: A total of 134 dental students participated in this study. We designed a questionnaire consisting of four questions on students' inclination to use ChatGPT, 25 multiple-choice questions, and one open-ended question. For the evaluation of ChatGPT-3.5 and 4, the inclination questions were removed and the remaining questions were kept identical. We measured the response times of ChatGPT-3.5 and 4 and compared their performance on the questions with that of the dental students. For the students' answers to the open-ended question, we also compared the outcome of ChatGPT-4's review with that of the teacher's review.
Results: On average, ChatGPT-3.5 and 4 required 3.63 ± 1.18 s (95% confidence interval [CI], 3.14, 4.11) and 12.49 ± 7.29 s (95% CI, 9.48, 15.50) per multiple-choice question, respectively (p < 0.001). On these 25 questions, students scored 21.51 ± 2.72 correct answers on average, whereas ChatGPT-3.5 and ChatGPT-4 scored 14 and 20, respectively. Furthermore, the outcomes of ChatGPT-4's review were consistent with those of the teacher's review.
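As a quick consistency check (not part of the paper), the reported 95% CIs for response time agree with t-based intervals computed over the 25 multiple-choice questions, i.e. mean ± t(0.975, 24) · SD/√25. A minimal Python sketch, assuming n = 25 timed responses per model:

```python
# Hypothetical verification sketch: reconstruct the t-based 95% CIs
# from the reported means and SDs, assuming n = 25 timed responses.
from math import sqrt
from scipy.stats import t

def ci95(mean: float, sd: float, n: int) -> tuple[float, float]:
    # half-width = t-critical (df = n - 1) times the standard error
    half = t.ppf(0.975, n - 1) * sd / sqrt(n)
    return mean - half, mean + half

print(ci95(3.63, 1.18, 25))   # ~ (3.14, 4.12) for ChatGPT-3.5
print(ci95(12.49, 7.29, 25))  # ~ (9.48, 15.50) for ChatGPT-4
```

The ChatGPT-4 interval matches the reported values exactly; the ChatGPT-3.5 interval agrees to within rounding of the reported mean and SD.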
Conclusions: For dental examinations related to periodontal surgery, ChatGPT's accuracy was not yet comparable to that of the students. Nevertheless, ChatGPT shows promise for assisting students with the curriculum, and for helping practitioners draft clinical letters and review students' textual descriptions.
Journal Introduction
The aim of the European Journal of Dental Education is to publish original topical and review articles of the highest quality in the field of Dental Education. The Journal seeks to disseminate widely the latest information on curriculum development, teaching methodologies, assessment techniques and quality assurance in the fields of dental undergraduate and postgraduate education and dental auxiliary personnel training. The scope includes the dental educational aspects of the basic medical sciences, the behavioural sciences, the interface with medical education, information technology and distance learning, and educational audit. Papers embodying the results of high-quality educational research of relevance to dentistry are particularly encouraged, as are evidence-based reports of novel and established educational programmes and their outcomes.