{"title":"Cholesteatoma: Conventional patient-focused versus AI-generated resources","authors":"Samantha N Little B.S., M.S., MPH","doi":"10.1016/j.jnma.2024.07.084","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><p>A diagnosis of cholesteatoma necessitates a collaborative effort between otolaryngologists and patients for effective treatment. While physicians play a central role in providing medical education, patients often seek additional information from external sources to enhance the comprehension of their diagnosis. This study compared patient-focused cholesteatoma literature from established sources to ChatGPT-generated material. Emphasizing health literacy's crucial influence on patient health outcomes, it evaluates both sources' accuracy, readability, understandability, and actionability to highlight potential differences in these frequently accessed resources.</p></div><div><h3>Methods</h3><p>A quantitative assessment was conducted by calculating the Patient Education Materials Assessment Tool (PEMAT) score, DISCERN score, Flesch-Kincaid Grade Level (FKGL), and Flesch Reading Ease Score (FRES) for each website and ChatGPT response. Raters determined accuracy by quantifying the number of errors in each resource.</p></div><div><h3>Results</h3><p>The patient-focused content was associated with better understandability compared to ChatGPT responses with a mean PEMAT-U score of 80.2 ± 10.6 and 60.0 ± 3.72 (P < .001), respectively. There was a significant difference in readability and quality demonstrated by average FKGL (P < .001), FRES (P < .001), and DISCERN scores in the individual ChatGPT responses (P < .001). There was no significant difference with regard to DISCERN combined responses (p = 0.224) or PEMAT-A scores (p = 0.567). An average of 2.6 ± 1.1 errors were found in the combined ChatGPT responses.</p></div><div><h3>Conclusion</h3><p>Patient-focused content from established sources on cholesteatoma was easier to read, more understandable, and more accurate when compared to responses from ChatGPT. k T Nickles, B.S.</p></div>","PeriodicalId":17369,"journal":{"name":"Journal of the National Medical Association","volume":"116 4","pages":"Page 448"},"PeriodicalIF":2.5000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the National Medical Association","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0027968424001652","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Abstract
Introduction
A diagnosis of cholesteatoma necessitates a collaborative effort between otolaryngologists and patients for effective treatment. While physicians play a central role in providing medical education, patients often seek additional information from external sources to improve their understanding of the diagnosis. This study compared patient-focused cholesteatoma literature from established sources with ChatGPT-generated material. Because health literacy strongly influences patient health outcomes, both sources were evaluated for accuracy, readability, understandability, and actionability to highlight potential differences between these frequently accessed resources.
Methods
A quantitative assessment was conducted by calculating the Patient Education Materials Assessment Tool (PEMAT) score, DISCERN score, Flesch-Kincaid Grade Level (FKGL), and Flesch Reading Ease Score (FRES) for each website and ChatGPT response. Raters determined accuracy by quantifying the number of errors in each resource.
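For reference, the two Flesch readability measures depend only on sentence, word, and syllable counts. The Python sketch below is illustrative only; the abstract does not describe the specific tool used, and the vowel-group syllable heuristic is an assumption, so it approximates rather than reproduces the reported scores.

```python
# Illustrative sketch of the standard Flesch formulas. The syllable counter is a
# rough vowel-group heuristic (an assumption, not the study's method), so output
# values are approximate.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as the number of vowel groups (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch-Kincaid Grade Level, Flesch Reading Ease) for a passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)

    words_per_sentence = n_words / sentences
    syllables_per_word = n_syllables / n_words

    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    fres = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    return fkgl, fres

# Example: score a short patient-education-style sentence.
fkgl, fres = readability("Cholesteatoma is an abnormal skin growth behind the eardrum.")
print(f"FKGL ~ {fkgl:.1f}, FRES ~ {fres:.1f}")
```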
Results
Patient-focused content was more understandable than the ChatGPT responses, with mean PEMAT-U scores of 80.2 ± 10.6 and 60.0 ± 3.72, respectively (p < .001). Readability and quality also differed significantly, as shown by mean FKGL (p < .001), FRES (p < .001), and DISCERN scores for the individual ChatGPT responses (p < .001). There was no significant difference in DISCERN scores for the combined ChatGPT responses (p = 0.224) or in PEMAT-A scores (p = 0.567). An average of 2.6 ± 1.1 errors was found in the combined ChatGPT responses.
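The abstract reports group means with standard deviations and p-values but does not name the statistical test used. As a hedged illustration only, a between-group comparison of this kind could be run with a Welch's t-test; the score lists below are invented placeholders, not the study's data.

```python
# Hypothetical sketch: the abstract does not state which significance test was used.
# A Welch's (unequal-variance) t-test is one common choice for comparing mean
# PEMAT-U scores between groups. The arrays are placeholder values, NOT study data.
from scipy import stats

website_pemat_u = [82, 71, 90, 78, 85, 75]   # placeholder per-website scores
chatgpt_pemat_u = [58, 62, 60, 64, 57, 59]   # placeholder per-response scores

t_stat, p_value = stats.ttest_ind(website_pemat_u, chatgpt_pemat_u, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```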
Conclusion
Patient-focused content from established sources on cholesteatoma was easier to read, more understandable, and more accurate than responses from ChatGPT.
About the journal:
Journal of the National Medical Association, the official journal of the National Medical Association, is a peer-reviewed publication whose purpose is to address medical care disparities of persons of African descent.
The Journal of the National Medical Association is focused on specialized clinical research activities related to the health problems of African Americans and other minority groups. Special emphasis is placed on the application of medical science to improve the healthcare of underserved populations both in the United States and abroad. The Journal has the following objectives: (1) to expand the base of original peer-reviewed literature and the quality of that research on the topic of minority health; (2) to provide greater dissemination of this research; (3) to offer appropriate and timely recognition of the significant contributions of physicians who serve these populations; and (4) to promote engagement by member and non-member physicians in the overall goals and objectives of the National Medical Association.