Assessing adult sinusitis guidelines: A comparative analysis of AAO-HNS and AI Chatbots
Shaun Edalati, Shiven Sharma, Rahul Guda, Vikram Vasan, Shahed Mohamed, Sunder Gidumal, Satish Govindaraj, Alfred Marc Iloreta
American Journal of Otolaryngology, Volume 46, Issue 1, Article 104563 (January 2025). DOI: 10.1016/j.amjoto.2024.104563
Abstract
Objective
To compare the responses of AI chatbots with the clinical practice guideline on adult sinusitis published by the American Academy of Otolaryngology—Head and Neck Surgery Foundation (AAO-HNS).
Methods
ChatGPT-3.5, ChatGPT-4.0, Bard, and Llama 2 are openly accessible chatbots based on large language models. Chatbot responses were compared against the AAO-HNS adult sinusitis clinical guideline and assessed for accuracy, over-conclusiveness, inclusion of supplemental information, and incompleteness.
Results
Twelve guideline statements, covered by 30 questions derived from the AAO-HNS guideline, were posed to 4 different chatbots. Adherence to the AAO-HNS guidelines varied: Bard provided 83.3% accurate responses, Llama 2 80%, ChatGPT-4.0 80%, and ChatGPT-3.5 73.3%. Over-conclusive responses were minimal, with only one instance each from Llama 2 and ChatGPT-4.0. Rates of incomplete responses varied more widely, with Llama 2 exhibiting the highest at 40%, followed by ChatGPT-3.5 at 36.7%, ChatGPT-4.0 at 33.3%, and Bard at 23.3%. Fisher's exact test revealed significant deviations from the guideline standard across all chatbots: lower accuracy (p = 0.012 for Llama 2, p = 0.026 for Bard, p = 0.012 for ChatGPT-4.0, p = 0.002 for ChatGPT-3.5), inclusion of supplemental information (p < 0.001 for all), and lower completeness (p < 0.01 for all), indicating potential areas for improvement in their performance.
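The abstract does not spell out how the 2×2 contingency tables for Fisher's exact test were constructed, so the following is only an illustrative sketch, not the authors' exact analysis: a minimal pure-Python two-sided Fisher's exact test applied to a hypothetical table built from Llama 2's accuracy count (24 of 30 responses accurate, compared against a guideline reference that is accurate by definition, 30 of 30). The resulting p-value need not match the published one, which may reflect a different table construction or a one-tailed test.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    row/column margins that is no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Probability of a table with x in the top-left cell, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(col1, row1)       # largest feasible top-left cell
    # Include all tables at least as extreme as the observed one
    # (tiny tolerance guards against floating-point ties).
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical table: Llama 2 accurate/inaccurate (24/6) vs. reference (30/0)
p = fisher_exact_two_sided(24, 6, 30, 0)
```

With this particular (assumed) table, the two-sided test rejects at the conventional 0.05 level, consistent in direction with the significant accuracy deficits the study reports.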
Conclusion
Although AI chatbots such as Llama 2, Bard, and ChatGPT show promise in sharing health-related information, their current performance in answering clinical questions about adult rhinosinusitis falls short of established clinical standards. Future revisions should address these shortcomings, emphasizing accuracy, completeness, and conformity with evidence-based practice.
Journal overview:
Be fully informed about developments in otology, neurotology, audiology, rhinology, allergy, laryngology, speech science, bronchoesophagology, facial plastic surgery, and head and neck surgery. Featured sections include original contributions, grand rounds, current reviews, case reports and socioeconomics.