Evaluation of information accuracy and clarity: ChatGPT responses to the most frequently asked questions about premature ejaculation.

Impact Factor: 2.6 | CAS Tier 3 (Medicine) | JCR Q1 (Medicine, General & Internal) | Sexual Medicine | Pub Date: 2024-06-02 | eCollection Date: 2024-06-01 | DOI: 10.1093/sexmed/qfae036 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11144523/pdf/
Mehmet Fatih Şahin, Anil Keleş, Rıdvan Özcan, Çağrı Doğan, Erdem Can Topkaç, Murat Akgül, Cenk Murat Yazıci
{"title":"Evaluation of information accuracy and clarity: ChatGPT responses to the most frequently asked questions about premature ejaculation.","authors":"Mehmet Fatih Şahin, Anil Keleş, Rıdvan Özcan, Çağrı Doğan, Erdem Can Topkaç, Murat Akgül, Cenk Murat Yazıci","doi":"10.1093/sexmed/qfae036","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Premature ejaculation (PE) is the most prevalent sexual dysfunction in men, and like many diseases and conditions, patients use Internet sources like ChatGPT, which is a popular artificial intelligence-based language model, for queries about this andrological disorder.</p><p><strong>Aim: </strong>The objective of this research was to evaluate the quality, readability, and understanding of texts produced by ChatGPT in response to frequently requested inquiries on PE.</p><p><strong>Methods: </strong>In this study we used Google Trends to identify the most frequently searched phrases related to PE. Subsequently, the discovered keywords were methodically entered into ChatGPT, and the resulting replies were assessed for quality using the Ensuring Quality Information for Patients (EQIP) program. The produced texts were assessed for readability using the Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), and DISCERN metrics.</p><p><strong>Outcomes: </strong>This investigation has identified substantial concerns about the quality of texts produced by ChatGPT, highlighting severe problems with reading and understanding.</p><p><strong>Results: </strong>The mean EQIP score for the texts was determined to be 45.93 ± 4.34, while the FRES was 15.8 ± 8.73. Additionally, the FKGL score was computed to be 15.68 ± 1.67 and the DISCERN score was 38.1 ± 3.78. The comparatively low average EQIP and DISCERN scores suggest that improvements are required to increase the quality and dependability of the presented information. In addition, the FKGL scores indicate a significant degree of linguistic intricacy, requiring a level of knowledge comparable to about 14 to 15 years of formal schooling in order to understand. The texts about treatment, which are the most frequently searched items, are more difficult to understand compared to other texts about other categories.</p><p><strong>Clinical implications: </strong>The results of this research suggest that compared to texts on other topics the PE texts produced by ChatGPT exhibit a higher degree of complexity, which exceeds the recommended reading threshold for effective health communication. Currently, ChatGPT is cannot be considered a substitute for comprehensive medical consultations.</p><p><strong>Strengths and limitations: </strong>This study is to our knowledge the first reported research investigating the quality and comprehensibility of information generated by ChatGPT in relation to frequently requested queries about PE. 
The main limitation is that the investigation included only the first 25 popular keywords in English.</p><p><strong>Conclusion: </strong>ChatGPT is incapable of replacing the need for thorough medical consultations.</p>","PeriodicalId":21782,"journal":{"name":"Sexual Medicine","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11144523/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sexual Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/sexmed/qfae036","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Citations: 0

Abstract

Background: Premature ejaculation (PE) is the most prevalent sexual dysfunction in men, and, as with many diseases and conditions, patients turn to Internet sources such as ChatGPT, a popular artificial intelligence-based language model, for queries about this andrological disorder.

Aim: The objective of this research was to evaluate the quality, readability, and understandability of texts produced by ChatGPT in response to frequently asked questions about PE.

Methods: In this study we used Google Trends to identify the most frequently searched phrases related to PE. The identified keywords were then systematically entered into ChatGPT, and the resulting replies were assessed for quality using the Ensuring Quality Information for Patients (EQIP) tool. The generated texts were assessed for readability using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES), and for reliability using the DISCERN instrument.
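For readers unfamiliar with the two readability formulas, a minimal Python sketch is shown below. It implements the standard published FRES and FKGL equations with a naive vowel-group syllable counter; it is only an illustration of the metrics, not the exact tooling used by the authors, whose sentence and syllable counts may differ.

```python
# Minimal sketch of the two readability formulas referenced in the study
# (FRES and FKGL). The syllable counter is a naive vowel-group heuristic,
# so dedicated readability calculators will give slightly different values.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels; at least 1 per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for a plain-English text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)

    # Standard Flesch Reading Ease and Flesch-Kincaid Grade Level equations
    fres = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
    return fres, fkgl

if __name__ == "__main__":
    sample = "Premature ejaculation is the most prevalent sexual dysfunction in men."
    fres, fkgl = readability(sample)
    print(f"FRES: {fres:.1f}, FKGL: {fkgl:.1f}")
```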

Outcomes: The investigation identified substantial concerns about the quality of texts produced by ChatGPT, highlighting serious problems with readability and comprehensibility.

Results: The mean EQIP score of the texts was 45.93 ± 4.34 and the mean FRES was 15.8 ± 8.73. The FKGL score was 15.68 ± 1.67 and the DISCERN score was 38.1 ± 3.78. The comparatively low average EQIP and DISCERN scores indicate that improvements are required to increase the quality and dependability of the presented information. The FKGL scores indicate a high degree of linguistic complexity, requiring roughly 14 to 15 years of formal schooling to understand. The texts about treatment, the most frequently searched category, were more difficult to understand than the texts in other categories.
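For context, the conventional Flesch Reading Ease bands place the reported mean FRES of 15.8 in the "very difficult / college graduate" range. A minimal sketch of that mapping is given below; the cut-offs are the standard published ones and are not defined by this study.

```python
# Conventional Flesch Reading Ease interpretation bands (standard published
# cut-offs, not taken from this paper).
FRES_BANDS = [
    (90.0, "very easy (about 5th grade)"),
    (80.0, "easy (6th grade)"),
    (70.0, "fairly easy (7th grade)"),
    (60.0, "plain English (8th-9th grade)"),
    (50.0, "fairly difficult (10th-12th grade)"),
    (30.0, "difficult (college)"),
]

def fres_band(score: float) -> str:
    for cutoff, label in FRES_BANDS:
        if score >= cutoff:
            return label
    return "very difficult (college graduate)"

print(fres_band(15.8))  # -> very difficult (college graduate)
```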

Clinical implications: The results of this research suggest that, compared with texts on other topics, the PE texts produced by ChatGPT exhibit a higher degree of complexity, exceeding the recommended reading level for effective health communication. Currently, ChatGPT cannot be considered a substitute for comprehensive medical consultations.

Strengths and limitations: To our knowledge, this is the first reported study investigating the quality and comprehensibility of information generated by ChatGPT in response to frequently asked questions about PE. The main limitation is that the investigation included only the top 25 popular keywords in English.

Conclusion: ChatGPT cannot replace the need for thorough medical consultations.

Source journal
Sexual Medicine (Medicine, General & Internal)
CiteScore: 5.40
Self-citation rate: 0.00%
Articles published: 103
Review time: 22 weeks
Journal description: Sexual Medicine is an official publication of the International Society for Sexual Medicine, and serves the field as the peer-reviewed, open access journal for rapid dissemination of multidisciplinary clinical and basic research in all areas of global sexual medicine, and particularly acts as a venue for topics of regional or sub-specialty interest. The journal is focused on issues in clinical medicine and epidemiology but also publishes basic science papers with particular relevance to specific populations. Sexual Medicine offers clinicians and researchers a rapid route to publication and the opportunity to publish in a broadly distributed and highly visible global forum. The journal publishes high quality articles from all over the world and actively seeks submissions from countries with expanding sexual medicine communities. Sexual Medicine relies on the same expert panel of editors and reviewers as The Journal of Sexual Medicine and Sexual Medicine Reviews.
Latest articles in this journal
Translating and validating the gay affirmative practice scale for nurses in mainland China.
A review of Peyronie's disease insurance coverage.
Beyond conventional wisdom: unexplored risk factors for penile fracture.
Clinical case of 45,X/46,XY mosaic male with ejaculatory disorder associated with seminal vesicle dysplasia: a case report.
Female sexual dysfunctions in multiple sclerosis patients with lower urinary tract symptoms: an Italian case-control study.