Comparative analysis of diagnostic accuracy in endodontic assessments: dental students vs. artificial intelligence.

Diagnosis · IF 2.2 (Q2, Medicine, General & Internal) · Pub Date: 2024-05-03 · eCollection Date: 2024-08-01 · DOI: 10.1515/dx-2024-0034
Abubaker Qutieshat, Alreem Al Rusheidi, Samiya Al Ghammari, Abdulghani Alarabi, Abdurahman Salem, Maja Zelihic

Abstract

Objectives: This study evaluates the comparative diagnostic accuracy of dental students and artificial intelligence (AI), specifically a modified ChatGPT 4, in endodontic assessments related to pulpal and apical conditions. The findings are intended to offer insights into the potential role of AI in augmenting dental education.

Methods: Involving 109 dental students divided into junior (54) and senior (55) groups, the study compared their diagnostic accuracy against ChatGPT's across seven clinical scenarios. Juniors had access to American Association of Endodontists (AAE) terminology assistance, while seniors relied on prior knowledge. Accuracy was measured against a gold standard established by experienced endodontists, using statistical analysis including Kruskal-Wallis and Dwass-Steel-Critchlow-Fligner tests.
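The omnibus comparison described above can be sketched with a Kruskal-Wallis test. The accuracy values below are illustrative placeholders, not the study's raw scores, and the DSCF pairwise post-hoc the authors used is provided by separate packages (e.g. scikit-posthocs) rather than coded here.

```python
# Hedged sketch of the study's omnibus test (Kruskal-Wallis) on
# per-participant diagnostic accuracy. The values below are
# hypothetical placeholders, NOT the study's data.
from scipy.stats import kruskal

# Hypothetical per-participant accuracy (%) across the seven scenarios
juniors = [78.6, 82.1, 75.0, 82.1, 71.4, 85.7, 78.6]
seniors = [85.7, 82.1, 89.3, 78.6, 85.7, 92.9, 82.1]
chatgpt = [100.0, 100.0, 96.4, 100.0, 100.0, 100.0, 100.0]

# Omnibus test across the three groups; a small p-value indicates at
# least one group's accuracy distribution differs from the others.
stat, p = kruskal(juniors, seniors, chatgpt)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```

With clearly separated groups like these, the test rejects the null of equal distributions; pairwise attribution of the difference then requires a post-hoc such as Dwass-Steel-Critchlow-Fligner.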

Results: ChatGPT achieved significantly higher accuracy (99.0 %) than seniors (79.7 %) and juniors (77.0 %). Median accuracy was 100.0 % for ChatGPT, 85.7 % for seniors, and 82.1 % for juniors. Statistical tests indicated significant differences between ChatGPT and both student groups (p<0.001), with no notable difference between the student cohorts.

Conclusions: The study reveals AI's capability to outperform dental students in diagnostic accuracy regarding endodontic assessments. This underscores AI's potential as a reference tool that students could utilize to enhance their understanding and diagnostic skills. Nevertheless, the potential for overreliance on AI, which may affect the development of critical analytical and decision-making abilities, necessitates a balanced integration of AI with human expertise and clinical judgement in dental education. Future research is essential to navigate the ethical and legal frameworks for incorporating AI tools such as ChatGPT into dental education and clinical practices effectively.

Source journal: Diagnosis

CiteScore: 7.20
Self-citation rate: 5.70 %
Annual publications: 41

Journal introduction: Diagnosis focuses on how diagnosis can be advanced, how it is taught, and how and why it can fail, leading to diagnostic errors. The journal welcomes both fundamental and applied works, improvement initiatives, opinions, and debates to encourage new thinking on improving this critical aspect of healthcare quality.

Topics:
- Factors that promote diagnostic quality and safety
- Clinical reasoning
- Diagnostic errors in medicine
- The factors that contribute to diagnostic error: human factors, cognitive issues, and system-related breakdowns
- Improving the value of diagnosis – eliminating waste and unnecessary testing
- How culture and removing blame promote awareness of diagnostic errors
- Training and education related to clinical reasoning and diagnostic skills
- Advances in laboratory testing and imaging that improve diagnostic capability
- Local, national and international initiatives to reduce diagnostic error