Aarav S Parekh, Joseph A S McCahon, Amy Nghe, David I Pedowitz, Joseph N Daniel, Selene G Parekh
{"title":"足踝患者教育材料与人工智能聊天机器人:比较分析。","authors":"Aarav S Parekh, Joseph A S McCahon, Amy Nghe, David I Pedowitz, Joseph N Daniel, Selene G Parekh","doi":"10.1177/19386400241235834","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The purpose of this study was to perform a comparative analysis of foot and ankle patient education material generated by the AI chatbots, as they compare to the American Orthopaedic Foot and Ankle Society (AOFAS)-recommended patient education website, FootCareMD.org.</p><p><strong>Methods: </strong>ChatGPT, Google Bard, and Bing AI were used to generate patient educational materials on 10 of the most common foot and ankle conditions. The content from these AI language model platforms was analyzed and compared with that in FootCareMD.org for accuracy of included information. Accuracy was determined for each of the 10 conditions on a basis of included information regarding background, symptoms, causes, diagnosis, treatments, surgical options, recovery procedures, and risks or preventions.</p><p><strong>Results: </strong>When compared to the reference standard of the AOFAS website FootCareMD.org, the AI language model platforms consistently scored below 60% in accuracy rates in all categories of the articles analyzed. ChatGPT was found to contain an average of 46.2% of key content across all included conditions when compared to FootCareMD.org. 
Comparatively, Google Bard and Bing AI contained 36.5% and 28.0% of information included on FootCareMD.org, respectively (P < .005).</p><p><strong>Conclusion: </strong>Patient education regarding common foot and ankle conditions generated by AI language models provides limited content accuracy across all 3 AI chatbot platforms.</p><p><strong>Level of evidence: </strong>Level IV.</p>","PeriodicalId":73046,"journal":{"name":"Foot & ankle specialist","volume":" ","pages":"19386400241235834"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Foot and Ankle Patient Education Materials and Artificial Intelligence Chatbots: A Comparative Analysis.\",\"authors\":\"Aarav S Parekh, Joseph A S McCahon, Amy Nghe, David I Pedowitz, Joseph N Daniel, Selene G Parekh\",\"doi\":\"10.1177/19386400241235834\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>The purpose of this study was to perform a comparative analysis of foot and ankle patient education material generated by the AI chatbots, as they compare to the American Orthopaedic Foot and Ankle Society (AOFAS)-recommended patient education website, FootCareMD.org.</p><p><strong>Methods: </strong>ChatGPT, Google Bard, and Bing AI were used to generate patient educational materials on 10 of the most common foot and ankle conditions. The content from these AI language model platforms was analyzed and compared with that in FootCareMD.org for accuracy of included information. 
Accuracy was determined for each of the 10 conditions on a basis of included information regarding background, symptoms, causes, diagnosis, treatments, surgical options, recovery procedures, and risks or preventions.</p><p><strong>Results: </strong>When compared to the reference standard of the AOFAS website FootCareMD.org, the AI language model platforms consistently scored below 60% in accuracy rates in all categories of the articles analyzed. ChatGPT was found to contain an average of 46.2% of key content across all included conditions when compared to FootCareMD.org. Comparatively, Google Bard and Bing AI contained 36.5% and 28.0% of information included on FootCareMD.org, respectively (P < .005).</p><p><strong>Conclusion: </strong>Patient education regarding common foot and ankle conditions generated by AI language models provides limited content accuracy across all 3 AI chatbot platforms.</p><p><strong>Level of evidence: </strong>Level IV.</p>\",\"PeriodicalId\":73046,\"journal\":{\"name\":\"Foot & ankle specialist\",\"volume\":\" \",\"pages\":\"19386400241235834\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Foot & ankle specialist\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/19386400241235834\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Foot & ankle 
specialist","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/19386400241235834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Foot and Ankle Patient Education Materials and Artificial Intelligence Chatbots: A Comparative Analysis.
Background: The purpose of this study was to perform a comparative analysis of foot and ankle patient education material generated by AI chatbots against the American Orthopaedic Foot and Ankle Society (AOFAS)-recommended patient education website, FootCareMD.org.
Methods: ChatGPT, Google Bard, and Bing AI were used to generate patient education materials on 10 of the most common foot and ankle conditions. The content from these AI language model platforms was analyzed and compared with that on FootCareMD.org for accuracy of included information. Accuracy was determined for each of the 10 conditions on the basis of included information regarding background, symptoms, causes, diagnosis, treatments, surgical options, recovery procedures, and risks or prevention.
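The scoring approach described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual instrument: it assumes each reference article is reduced to a set of key content items per category, and each chatbot article is scored by the fraction of those items it covers, averaged across conditions. The category names mirror those listed in the Methods; the example data are invented.

```python
# Illustrative sketch of content-coverage scoring (assumption: reference and
# chatbot articles have been reduced to sets of key content items).

CATEGORIES = ["background", "symptoms", "causes", "diagnosis",
              "treatments", "surgical options", "recovery", "risks/prevention"]

def condition_score(reference_items: set[str], chatbot_items: set[str]) -> float:
    """Fraction of reference content items present in the chatbot article."""
    if not reference_items:
        return 0.0
    return len(reference_items & chatbot_items) / len(reference_items)

def platform_accuracy(per_condition_scores: list[float]) -> float:
    """Mean coverage across conditions, expressed as a percentage."""
    return 100.0 * sum(per_condition_scores) / len(per_condition_scores)

# Invented example: one condition covers 2 of 4 reference items,
# another covers 2 of 2.
scores = [
    condition_score({"background", "symptoms", "causes", "diagnosis"},
                    {"background", "symptoms"}),
    condition_score({"treatments", "surgical options"},
                    {"treatments", "surgical options"}),
]
print(f"{platform_accuracy(scores):.1f}%")  # 75.0%
```

A percentage computed this way matches how the Results report each platform's coverage relative to FootCareMD.org (e.g., 46.2% for ChatGPT).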
Results: Compared with the reference standard, the AOFAS website FootCareMD.org, the AI language model platforms consistently scored below 60% accuracy in every category of the articles analyzed. ChatGPT contained an average of 46.2% of the key content across all included conditions, while Google Bard and Bing AI contained 36.5% and 28.0% of the information included on FootCareMD.org, respectively (P < .005).
Conclusion: Patient education on common foot and ankle conditions generated by AI language models provides limited content accuracy across all 3 AI chatbot platforms.
Level of evidence: Level IV.