{"title":"Assessing the Competence of Artificial Intelligence Programs in Pediatric Ophthalmology and Strabismus and Comparing their Relative Advantages.","authors":"Eyupcan Sensoy, Mehmet Citirik","doi":"10.22336/rjo.2023.61","DOIUrl":null,"url":null,"abstract":"<p><p><b>Objective:</b> The aim of the study was to determine the knowledge levels of ChatGPT, Bing, and Bard artificial intelligence programs produced by three different manufacturers regarding pediatric ophthalmology and strabismus and to compare their strengths and weaknesses. <b>Methods:</b> Forty-four questions testing the knowledge levels of pediatric ophthalmology and strabismus were asked in ChatGPT, Bing, and Bard artificial intelligence programs. Questions were grouped as correct or incorrect. The accuracy rates were statistically compared. <b>Results:</b> ChatGPT chatbot gave 59.1% correct answers, Bing chatbot gave 70.5% correct answers, and Bard chatbot gave 72.7% correct answers to the questions asked. No significant difference was observed between the rates of correct answers to the questions in all 3 artificial intelligence programs (p=0.343, Pearson's chi-square test). <b>Conclusion:</b> Although information about pediatric ophthalmology and strabismus can be accessed using current artificial intelligence programs, the answers given may not always be accurate. Care should always be taken when evaluating this information.</p>","PeriodicalId":94355,"journal":{"name":"Romanian journal of ophthalmology","volume":"67 4","pages":"389-393"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10793362/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Romanian journal of ophthalmology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22336/rjo.2023.61","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Objective: To determine the knowledge levels of the ChatGPT, Bing, and Bard artificial intelligence chatbots, produced by three different manufacturers, regarding pediatric ophthalmology and strabismus, and to compare their strengths and weaknesses.

Methods: Forty-four questions testing knowledge of pediatric ophthalmology and strabismus were posed to the ChatGPT, Bing, and Bard chatbots. Each answer was graded as correct or incorrect, and the accuracy rates were compared statistically.

Results: ChatGPT answered 59.1% of the questions correctly, Bing 70.5%, and Bard 72.7%. No significant difference was observed between the correct-answer rates of the three chatbots (p=0.343, Pearson's chi-square test).

Conclusion: Although information about pediatric ophthalmology and strabismus can be accessed through current artificial intelligence chatbots, the answers given are not always accurate. Care should be taken when evaluating this information.
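The reported statistical comparison can be reproduced with a short script. The sketch below is a minimal illustration, assuming the correct-answer counts implied by the reported percentages out of 44 questions (26 for ChatGPT, 31 for Bing, 32 for Bard); these counts are inferred from the rounded accuracy rates, not stated explicitly in the abstract.

    from scipy.stats import chi2_contingency

    # Correct/incorrect counts inferred from the reported accuracy rates
    # (44 questions per chatbot): 26/44 = 59.1%, 31/44 = 70.5%, 32/44 = 72.7%.
    observed = [
        [26, 44 - 26],  # ChatGPT: correct, incorrect
        [31, 44 - 31],  # Bing
        [32, 44 - 32],  # Bard
    ]

    # Pearson's chi-square test of independence on the 3x2 contingency table
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")

With these assumed counts, the test yields p ≈ 0.343 (chi2 ≈ 2.14, dof = 2), which agrees with the p-value reported in the abstract.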