Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan
{"title":"评估经人工智能修改和生成的内窥镜颅底手术患者教育材料的可读性、可靠性和质量。","authors":"Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan","doi":"10.1177/19458924241273055","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below sixth-grade literacy, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries but its utility in improving patient education materials has not been explored.</p><p><strong>Objective: </strong>To examine the current state of readability and quality of online patient education materials and determined the utility of ChatGPT for improving articles and generating patient education materials.</p><p><strong>Methods: </strong>An article search was performed utilizing 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT and iterative queries were used to generate an article <i>de novo</i>. The Flesch Reading Ease (FRE) and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.</p><p><strong>Results: </strong>Sixty-six articles were located. ChatGPT improved FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, <i>p</i> < 0.001), from university to 10th grade level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and higher quality than 94% (51.0 vs. 37.6 ± 6.1). 56.7% of the online articles had \"poor\" quality.</p><p><strong>Conclusions: </strong>ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate more reliable and higher quality patient education materials compared to most existing online articles and can be tailored to match readability of average online articles.</p>","PeriodicalId":7650,"journal":{"name":"American Journal of Rhinology & Allergy","volume":" ","pages":"396-402"},"PeriodicalIF":2.5000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery.\",\"authors\":\"Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan\",\"doi\":\"10.1177/19458924241273055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below sixth-grade literacy, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. 
ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries but its utility in improving patient education materials has not been explored.</p><p><strong>Objective: </strong>To examine the current state of readability and quality of online patient education materials and determined the utility of ChatGPT for improving articles and generating patient education materials.</p><p><strong>Methods: </strong>An article search was performed utilizing 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT and iterative queries were used to generate an article <i>de novo</i>. The Flesch Reading Ease (FRE) and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.</p><p><strong>Results: </strong>Sixty-six articles were located. ChatGPT improved FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, <i>p</i> < 0.001), from university to 10th grade level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and higher quality than 94% (51.0 vs. 37.6 ± 6.1). 56.7% of the online articles had \\\"poor\\\" quality.</p><p><strong>Conclusions: </strong>ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate more reliable and higher quality patient education materials compared to most existing online articles and can be tailored to match readability of average online articles.</p>\",\"PeriodicalId\":7650,\"journal\":{\"name\":\"American Journal of Rhinology & Allergy\",\"volume\":\" \",\"pages\":\"396-402\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"American Journal of Rhinology & Allergy\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/19458924241273055\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/8/21 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"OTORHINOLARYNGOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Rhinology & Allergy","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/19458924241273055","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/8/21 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"OTORHINOLARYNGOLOGY","Score":null,"Total":0}
Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery.
Background: Despite recommendations from the National Institutes of Health and the American Medical Association to publish online patient education materials at or below a sixth-grade reading level, materials pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast amounts of internet data to generate responses to user queries, but its utility in improving patient education materials has not been explored.
Objective: To examine the current state of readability and quality of online patient education materials and to determine the utility of ChatGPT for improving existing articles and generating new patient education materials.
Methods: An article search was performed using 10 different search terms related to ESBS. The 10 least readable existing patient-facing articles were modified with ChatGPT, and iterative queries were used to generate an article de novo. The Flesch Reading Ease (FRE) score and related metrics measured overall readability and content literacy level, while the DISCERN instrument assessed article reliability and quality.
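For context, the FRE score is a deterministic function of average sentence length and average syllables per word: FRE = 206.835 − 1.015 × (total words / total sentences) − 84.6 × (total syllables / total words), with higher scores indicating easier text. A minimal Python sketch is given below; the vowel-group syllable counter is a naive assumption for illustration and is not necessarily the tokenizer used by the readability calculators in this study.

import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels, dropping one for a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

sample = "The surgeon removes the tumor through the nose. No cuts are made on the face."
print(round(flesch_reading_ease(sample), 1))  # higher scores indicate easier reading

On the conventional interpretation of the scale, a score near 20 corresponds to university-graduate-level text while a score in the mid-50s falls in roughly the 10th-to-12th-grade range, which is how the grade-level shift in the Results below can be read.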
Results: Sixty-six articles were located. ChatGPT improved the FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, p < 0.001), from a university reading level to a 10th-grade level. The generated article was more readable than 48.5% of the online articles (38.9 vs. 39.4 ± 12.4) and of higher quality than 94% (51.0 vs. 37.6 ± 6.1). Overall, 56.7% of the online articles were of "poor" quality.
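The abstract does not state which statistical test produced the reported p-value. As a hypothetical sketch of how such a before-and-after comparison of FRE scores could be reproduced, a paired t-test via scipy.stats.ttest_rel is shown below, using made-up placeholder scores rather than the study's data.

from scipy import stats

# Placeholder FRE scores for illustration only; these are not the values from the study.
fre_original = [15.2, 18.4, 21.0, 19.8, 25.1, 17.3, 22.6, 14.9, 20.5, 23.7]
fre_modified = [52.1, 58.3, 60.2, 55.9, 61.4, 49.8, 57.0, 54.6, 59.1, 62.3]

# Each article serves as its own control (before vs. after ChatGPT modification), so a paired test is a natural choice.
t_stat, p_value = stats.ttest_rel(fre_original, fre_modified)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")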
Conclusions: ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate patient education materials that are more reliable and of higher quality than most existing online articles, and its output can be tailored to match the readability of the average online article.
Journal description:
The American Journal of Rhinology & Allergy is a peer-reviewed, scientific publication committed to expanding knowledge and publishing the best clinical and basic research within the fields of Rhinology & Allergy. Its focus is to publish information which contributes to improved quality of care for patients with nasal and sinus disorders. Its primary readership consists of otolaryngologists, allergists, and plastic surgeons. Published material includes peer-reviewed original research, clinical trials, and review articles.