Evaluating the Use of ChatGPT to Accurately Simplify Patient-centered Information about Breast Cancer Prevention and Screening

Hana L Haver, Anuj K Gupta, Emily B Ambinder, Manisha Bahl, Eniola T Oluyemi, Jean Jeudy, Paul H Yi

Radiology: Imaging Cancer (JCR Q1, Oncology; impact factor 5.6), published 2024-03-01
DOI: https://doi.org/10.1148/rycan.230086
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10988327/pdf/
Citations: 0
Abstract
Purpose: To evaluate the use of ChatGPT as a tool to simplify answers to common questions about breast cancer prevention and screening.

Materials and Methods: In this retrospective, exploratory study, ChatGPT was asked in March and August 2023 to simplify responses to 25 questions about breast cancer to a sixth-grade reading level. Simplified responses were evaluated for clinical appropriateness. All original and simplified responses were assessed for reading ease on the Flesch Reading Ease Index and for readability on five scales: the Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Automated Readability Index, and Simple Measure of Gobbledygook (SMOG) Index. Mean reading ease, readability, and word count were compared between original and simplified responses using paired t tests. The McNemar test was used to compare the proportions of responses with adequate reading ease (score of 60 or greater) and adequate readability (sixth-grade level).

Results: ChatGPT improved mean reading ease (original responses, 46 vs simplified responses, 70; P < .001) and readability (original, grade 13 vs simplified, grade 8.9; P < .001) and decreased word count (original, 193 vs simplified, 173; P < .001). Ninety-two percent (23 of 25) of simplified responses were considered clinically appropriate. All 25 simplified responses (100%) met the criterion for adequate reading ease, compared with only two of 25 original responses (P < .001). Only two of the 25 simplified responses (8%) met the criterion for adequate readability.

Conclusion: ChatGPT simplified answers to common breast cancer screening and prevention questions, improving readability by four grade levels, though its potential to produce incorrect information necessitates physician oversight when using this tool.

Keywords: Mammography, Screening, Informatics, Breast, Education, Health Policy and Practice, Oncology, Technology Assessment

Supplemental material is available for this article. © RSNA, 2023.
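The Flesch Reading Ease and Flesch-Kincaid Grade Level scores used in the study are simple functions of average sentence length and average syllables per word. As a rough illustration of how these scores are computed (not the authors' actual tooling), the sketch below applies the published formulas with a crude vowel-group syllable heuristic; real readability tools use more careful syllable counting:

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups, drop a trailing silent 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_metrics(text):
    """Return (reading ease, Flesch-Kincaid grade) for a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    # Published Flesch formulas:
    ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return ease, grade
```

Short, monosyllabic sentences score well above the study's "adequate" ease threshold of 60, while long, polysyllabic medical prose falls below it; the original responses averaged an ease score of 46 and roughly grade 13 on such scales.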
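The McNemar test used to compare paired proportions (e.g., 25 of 25 simplified vs 2 of 25 original responses meeting the ease threshold) depends only on the discordant pairs. A minimal exact-binomial version, shown purely to illustrate the statistic (the study does not state which implementation was used), looks like this:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p value from discordant pair counts.

    b: pairs adequate only after simplification
    c: pairs adequate only before simplification
    Under H0, discordant pairs split 50/50, so the smaller count
    follows Binomial(b + c, 0.5); the p value doubles that tail.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With 23 responses becoming adequate after simplification and none going the other way (b = 23, c = 0), the exact p value is about 2.4e-7, consistent with the reported P < .001.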