A Comparative Analysis of Three Large Language Models on Bruxism Knowledge
Elisa Souza Camargo, Isabella Christina Costa Quadras, Roberto Ramos Garanhani, Cristiano Miranda de Araujo, Juliana Stuginski-Barbosa
Journal of Oral Rehabilitation | DOI: 10.1111/joor.13948 | Published: 2025-02-06
Abstract
Background: Artificial Intelligence (AI) has been widely used in health research, but the effectiveness of large language models (LLMs) in providing accurate information on bruxism has not yet been evaluated.
Objectives: To assess the readability, accuracy and consistency of three LLMs in responding to frequently asked questions about bruxism.
Methods: This cross-sectional observational study used the Google Trends tool to identify the 10 most frequently searched topics about bruxism. Thirty frequently asked questions were selected and submitted to ChatGPT-3.5, ChatGPT-4 and Gemini at two different times (T1 and T2). Readability was measured using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKG) metrics. Responses were evaluated for accuracy on a three-point scale, and consistency was verified by comparing responses between T1 and T2. Statistical analysis included ANOVA, chi-squared tests and Cohen's kappa coefficient, with significance set at p < 0.05.
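For context, the standard published formulas behind the two readability metrics (not restated in the abstract) are:

$$\mathrm{FRE} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}$$

$$\mathrm{FKG} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59$$

A higher FRE score indicates easier reading, while FKG maps roughly onto a US school grade level, so a lower FKG suggests text accessible to readers with less formal education.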
Results: In terms of readability, there was no difference between models in FRE scores. The Gemini model showed lower FKG scores than the Generative Pretrained Transformer (GPT)-3.5 and GPT-4 models. The average accuracy of the responses was 68.33% for GPT-3.5, 65% for GPT-4 and 55% for Gemini, with no significant differences between the models (p = 0.290). Consistency was substantial for all models, highest for GPT-3.5 (95%). All three LLMs demonstrated substantial agreement between T1 and T2.
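The consistency analysis rests on Cohen's kappa, which corrects raw agreement for agreement expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement between T1 and T2 responses and $p_e$ is the proportion of agreement expected by chance. On the commonly used Landis and Koch scale, values of 0.61 to 0.80 are labelled "substantial", which appears to be the interpretation applied here.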
Conclusion: Gemini's responses, given their lower grade-level scores, were potentially more accessible to a broader patient population. The LLMs demonstrated substantial consistency but only moderate accuracy, indicating that these tools should not replace professional dental guidance.
Journal Overview
Journal of Oral Rehabilitation aims to be the leading journal of dental research across all aspects of oral rehabilitation and applied oral physiology. It covers all diagnostic and clinical management aspects necessary to re-establish oral function that is harmonious both subjectively and objectively.
Oral rehabilitation may become necessary as a result of developmental or acquired disturbances in the orofacial region, orofacial traumas, or a variety of dental and oral diseases (primarily dental caries and periodontal diseases) and orofacial pain conditions. As such, oral rehabilitation in the twenty-first century is a matter of skilful diagnosis and minimal, appropriate intervention, the nature of which is intimately linked to a profound knowledge of oral physiology, oral biology, and dental and oral pathology.
The scientific content of the journal therefore strives to reflect the best of evidence-based clinical dentistry. Modern clinical management should be based on solid scientific evidence gathered about diagnostic procedures and the properties and efficacy of the chosen intervention (e.g. material science, biological, toxicological, pharmacological or psychological aspects). The content of the journal also reflects documentation of the possible side-effects of rehabilitation, and includes prognostic perspectives of the treatment modalities chosen.