Artificial intelligence improves urologic oncology patient education and counseling.

IF 1.2 | CAS Q4 (Medicine) | JCR Q3 (Urology & Nephrology) | Canadian Journal of Urology | Pub Date: 2024-10-01
Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah
{"title":"人工智能改善了泌尿科肿瘤患者的教育和咨询。","authors":"Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Patients seek support from online resources when facing a troubling urologic cancer diagnosis. Physician-written resources exceed the recommended 6-8th grade reading level, creating confusion and driving patients towards unregulated online materials like AI chatbots. We aim to compare the readability and quality of patient education on ChatGPT against Epic and Urology Care Foundation (UCF).</p><p><strong>Materials and methods: </strong>We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.</p><p><strong>Results: </strong>Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was overall poor but particularly lowest (37%) for Epic. On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (7.50 and 3.53), but only ChatGPT-a retained high quality.</p><p><strong>Conclusions: </strong>Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.</p>","PeriodicalId":56323,"journal":{"name":"Canadian Journal of Urology","volume":"31 5","pages":"12013-12018"},"PeriodicalIF":1.2000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence improves urologic oncology patient education and counseling.\",\"authors\":\"Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>Patients seek support from online resources when facing a troubling urologic cancer diagnosis. Physician-written resources exceed the recommended 6-8th grade reading level, creating confusion and driving patients towards unregulated online materials like AI chatbots. We aim to compare the readability and quality of patient education on ChatGPT against Epic and Urology Care Foundation (UCF).</p><p><strong>Materials and methods: </strong>We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. 
Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.</p><p><strong>Results: </strong>Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was overall poor but particularly lowest (37%) for Epic. On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (7.50 and 3.53), but only ChatGPT-a retained high quality.</p><p><strong>Conclusions: </strong>Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.</p>\",\"PeriodicalId\":56323,\"journal\":{\"name\":\"Canadian Journal of Urology\",\"volume\":\"31 5\",\"pages\":\"12013-12018\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Canadian Journal of Urology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"UROLOGY & NEPHROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian Journal of Urology","FirstCategoryId":"3","ListUrlMain":"","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Patients seek support from online resources when facing a troubling urologic cancer diagnosis. Physician-written resources exceed the recommended 6th-8th grade reading level, creating confusion and driving patients towards unregulated online materials such as AI chatbots. We aim to compare the readability and quality of patient education content from ChatGPT against Epic and the Urology Care Foundation (UCF).

Materials and methods: We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.
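
The abstract does not name the six validated readability formulas. One widely used example is the Flesch-Kincaid Grade Level, computed as 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. Below is a minimal Python sketch assuming a naive vowel-group syllable counter; libraries such as textstat implement these formulas with more robust syllable counting.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = ("Prostate cancer is a disease in which cells of the prostate "
          "gland grow out of control. Many treatments are available.")
print(f"Estimated reading grade level: {flesch_kincaid_grade(sample):.2f}")
```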

Results: Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer, with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was poor overall and lowest for Epic (37%). On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (grade levels of 7.50 and 3.53, respectively), but only ChatGPT-a retained high quality.

Conclusions: Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.
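
The study's exact ChatGPT-a prompt is not published in this abstract. As an illustration only, here is a hypothetical readability-adjusted prompt using the OpenAI Python client; the model name and instruction wording are assumptions, not the authors' protocol.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical instruction in the spirit of ChatGPT-a: request plain
# language at the recommended 6th-8th grade reading level.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; not specified by the study
    messages=[
        {
            "role": "system",
            "content": (
                "You are a patient educator. Answer at a 6th-grade reading "
                "level, using short sentences and everyday words."
            ),
        },
        {
            "role": "user",
            "content": "What are my treatment options for localized prostate cancer?",
        },
    ],
)
print(response.choices[0].message.content)
```

Output from such a prompt could then be checked against a readability formula like the sketch above to confirm it meets the target grade level.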

Source journal: Canadian Journal of Urology (Urology & Nephrology)
CiteScore: 1.90
Self-citation rate: 0.00%
Articles per year: 86
Review time: 6-12 weeks
Journal description: The CJU publishes articles of interest to the field of urology and to related specialties that treat urologic diseases.
Latest articles from this journal:
- A Chief Wellness Officer, Every Hospital Should Have One; Marlon Brando Was Right.
- Artificial intelligence improves urologic oncology patient education and counseling.
- Factors associated with surgical refusal and non-surgical candidacy in stage 1 kidney cancer: a National Cancer Database (NCDB) analysis.
- How I Do It: EnPlace sacrospinous ligament fixation.
- Implications of MRI contrast enhancement following focal prostate cancer cryoablation.