Responses of Five Different Artificial Intelligence Chatbots to the Top Searched Queries About Erectile Dysfunction: A Comparative Analysis

IF 3.5 | CAS Region 3 (Medicine) | Q1 Health Care Sciences & Services | Journal of Medical Systems | Pub Date: 2024-04-03 | DOI: 10.1007/s10916-024-02056-0
Mehmet Fatih Şahin, Hüseyin Ateş, Anıl Keleş, Rıdvan Özcan, Çağrı Doğan, Murat Akgül, Cenk Murat Yazıcı
Citations: 0

Abstract

The aim of this study was to evaluate and compare the quality and readability of responses generated by five different artificial intelligence (AI) chatbots—ChatGPT, Bard, Bing, Ernie, and Copilot—to the top searched queries about erectile dysfunction (ED). Google Trends was used to identify relevant ED-related phrases. Each AI chatbot received a specific sequence of the 25 most frequently searched terms as input. Responses were evaluated using the DISCERN and Ensuring Quality Information for Patients (EQIP) instruments and the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FKRE) metrics. The three most frequently searched phrases were “erectile dysfunction cause,” “how to erectile dysfunction,” and “erectile dysfunction treatment.” Zimbabwe, Zambia, and Ghana exhibited the highest level of interest in ED. None of the AI chatbots achieved the necessary degree of readability. However, Bard exhibited significantly higher FKRE and FKGL ratings (p = 0.001), and Copilot achieved better EQIP and DISCERN ratings than the other chatbots (p = 0.001). Bard used the simplest linguistic framework and posed the least challenge in terms of readability and comprehension, while Copilot's text quality on ED was superior to that of the other chatbots. As new chatbots are introduced, their understandability and text quality increase, providing better guidance to patients.
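The readability side of the evaluation rests on two standard formulas: FKGL estimates the U.S. school grade needed to understand a text, and FKRE scores ease of reading on a 0-100 scale. As a rough illustration only (not the authors' tooling, which the abstract does not describe), a minimal Python sketch, assuming a naive vowel-run syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (naive heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: higher means harder text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def fkre(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher means easier text (60-70 is roughly plain English)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def score_text(text: str) -> tuple[float, float]:
    """Return (FKGL, FKRE) for a chatbot response, using naive tokenization."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    tokens = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(t) for t in tokens)
    return (fkgl(len(tokens), sentences, syllables),
            fkre(len(tokens), sentences, syllables))
```

Patient-education material is commonly targeted at a sixth-grade level (FKGL around 6, FKRE above 60), which is the kind of threshold against which the abstract's finding that "none of the AI chatbots achieved the necessary degree of readability" would be judged.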

Source Journal

Journal of Medical Systems (Medicine - Health Care)
CiteScore: 11.60
Self-citation rate: 1.90%
Articles per year: 83
Review time: 4.8 months
About the journal: Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.
Latest articles in this journal

Garbage In, Garbage Out? Negative Impact of Physiological Waveform Artifacts in a Hospital Clinical Data Warehouse.
21st Century Cures Act and Information Blocking: How Have Different Specialties Responded?
Self-Supervised Learning for Near-Wild Cognitive Workload Estimation.
Electronic Health Records Sharing Based on Consortium Blockchain.
Large Language Models in Healthcare: An Urgent Call for Ongoing, Rigorous Validation.