Background
Parental health literacy significantly affects pediatric ophthalmology follow-up care and adherence to treatment regimens. Yet patient education materials (PEMs) often exceed the American Medical Association's recommended 6th-grade reading level. Large language models (LLMs) may be able to improve the readability of PEMs without sacrificing quality. This study evaluated the baseline readability, quality, and accuracy of PEMs from the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) and assessed how LLMs may improve them.
Methods
This cross-sectional study analyzed 111 PEMs from the AAPOS website. Readability was assessed with the Flesch-Kincaid Grade Level (FKGL) and the Simple Measure of Gobbledygook (SMOG). Quality and understandability were evaluated with the DISCERN instrument and the Patient Education Materials Assessment Tool (PEMAT), respectively, and accuracy was assessed with a Likert misinformation scale. After the initial analysis, each PEM was rewritten separately by ChatGPT-4 and Gemini Advanced, and the rewrites were reassessed with the same measures.
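For reference, both readability indices map simple text statistics to an approximate U.S. school grade level. The standard published formulas (not restated in the source) are:

\[
\text{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
\]

\[
\text{SMOG} = 1.0430\sqrt{\text{polysyllabic words} \times \frac{30}{\text{total sentences}}} + 3.1291
\]

Lower scores therefore indicate text accessible to readers with fewer years of schooling, which is why the AMA's 6th-grade target corresponds to a score of roughly 6 on either index.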
Results
Baseline PEMs were written, on average, at a 9th-grade reading level (SMOG, 9.0 ± 1.6; FKGL, 9.6 ± 2.1), with only 3.6% meeting the 6th-grade recommendation. ChatGPT-4 rewrites improved readability to a 7th-grade level without compromising quality, whereas Gemini Advanced rewrites met the 6th-grade threshold but showed modestly reduced quality (DISCERN, 3; P < 0.001). Both models enhanced understandability (ChatGPT-4, 90.9%; Gemini Advanced, 91.3%; P < 0.001), and neither model's rewrites contained misinformation (Likert = 1).
Conclusions
AAPOS PEMs were accurate and high in quality at baseline but were written at a high-school reading level. As supplemental tools, LLMs can improve the readability and understandability of PEMs. LLM-revised PEMs should still be thoroughly reviewed by physicians to ensure safety and educational value.