Introduction
This study aims to evaluate the quality, accuracy, readability, and understandability of patient information provided by various artificial intelligence (AI)-based chatbots regarding orthodontic tooth extractions.
Materials and methods
Two researchers compiled a list of questions that patients might ask the chatbots. The questions were divided into two categories, ‘Pre-extraction’ and ‘Post-extraction’, with 20 questions in each. Four criteria were used to evaluate the chatbot responses to the 40 questions: the Global Quality Scale (GQS), the Simple Measure of Gobbledygook (SMOG) readability index, understandability, and an accuracy index. Jamovi software (The Jamovi Project, 2022, version 2.3; Sydney, Australia) was used for all statistical analyses.
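For context, the SMOG index is a standard readability formula that estimates the U.S. school grade level required to comprehend a text from its sentence count and its count of polysyllabic (three-or-more-syllable) words; in its usual formulation,

SMOG grade = 1.0430 × √(polysyllable count × (30 / sentence count)) + 3.1291,

where grades of roughly 13 and above correspond to college-level reading material.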
Results
Claude 3.5 Sonnet showed the highest mean values for the GQS, readability (SMOG), and accuracy indices. In terms of readability, as measured by the SMOG index, the responses of all three AI-based chatbots required a college-level education for comprehension. In both the ‘Pre-extraction’ and ‘Post-extraction’ sections, Claude 3.5 Sonnet demonstrated the highest mean values for the GQS, readability, and accuracy indices. For Understandability subcriteria 1 and 2, statistically significant differences were observed among the three chatbots, attributable primarily to the difference between Gemini and Claude 3.5 Sonnet.
Conclusion
The AI-based chatbots, despite their varied features, generally provided answers of high quality and reliability, though with difficult readability. Although the medical information on orthodontic tooth extraction supplied by chatbots is generally of high quality, individuals are advised to consult their healthcare professionals on this issue.