The Role of Artificial Intelligence in Patient Education: A Bladder Cancer Consultation with ChatGPT

Allen Ao Guo, Basil Razi, Paul Kim, Ashan Canagasingham, Justin Vass, Venu Chalasani, Krishan Rasiah, Amanda Chung
Société Internationale d'Urologie Journal, published 2024-06-14. DOI: 10.3390/siuj5030032

Abstract

Objectives: ChatGPT is a large language model that is able to generate human-like text. The aim of this study was to evaluate ChatGPT as a potential supplement to urological clinical practice by exploring its capacity, efficacy, and accuracy when delivering information on frequently asked questions from patients with bladder cancer. Methods: We posed 10 hypothetical questions to ChatGPT to simulate a doctor–patient consultation for patients recently diagnosed with bladder cancer. The responses were then assessed by specialist urologists using two predefined scales of accuracy and completeness. Results: ChatGPT provided coherent answers that were concise and easily comprehensible. Overall, mean accuracy scores for the 10 questions ranged from 3.7 to 6.0, with a median of 5.0. Mean completeness scores ranged from 1.3 to 2.3, with a median of 1.8. ChatGPT was also cognizant of its own limitations and recommended that all patients adhere closely to medical advice dispensed by their healthcare provider. Conclusions: This study provides further insight into the role of ChatGPT as an adjunct consultation tool for answering frequently asked questions from patients with a bladder cancer diagnosis. Whilst it was able to provide information in a concise and coherent manner, there were concerns regarding the completeness of the information conveyed. Further development and research into this rapidly evolving tool are required to ascertain the potential impacts of AI models such as ChatGPT in urology and the broader healthcare landscape.
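The summary statistics reported above (a per-question mean across raters, then a range and median taken over the 10 question means) can be sketched as follows. The ratings below are purely illustrative placeholders, not the study's actual data, and the number of raters per question is an assumption.

```python
from statistics import mean, median

# Hypothetical ratings: each of the 10 questions is scored by several
# specialist urologists on a predefined accuracy scale. These numbers
# are illustrative only, not the study's actual data.
accuracy_ratings = {
    "Q1": [5, 6, 5], "Q2": [4, 4, 3], "Q3": [6, 6, 6], "Q4": [5, 5, 4],
    "Q5": [4, 5, 5], "Q6": [3, 4, 4], "Q7": [6, 5, 6], "Q8": [5, 4, 5],
    "Q9": [4, 4, 4], "Q10": [5, 6, 5],
}

# Per-question mean score across raters.
question_means = {q: mean(scores) for q, scores in accuracy_ratings.items()}

# Range and median over the 10 question means, as in the Results section.
lowest = min(question_means.values())
highest = max(question_means.values())
overall_median = median(question_means.values())

print(f"mean scores range {lowest:.1f}-{highest:.1f}, median {overall_median:.1f}")
```

The same aggregation applies unchanged to the completeness scores; only the input ratings differ.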