Assessing AI Simplification of Medical Texts: Readability and Content Fidelity

International Journal of Medical Informatics · Impact Factor 3.7 · CAS Region 2 (Medicine) · JCR Q2 (Computer Science, Information Systems) · Pub Date: 2024-12-01 · DOI: 10.1016/j.ijmedinf.2024.105743
Bryce Picton, Saman Andalib, Aidin Spina, Brandon Camp, Sean S. Solomon, Jason Liang, Patrick M. Chen, Jefferson W. Chen, Frank P. Hsu, Michael Y. Oh
{"title":"Assessing AI Simplification of Medical Texts: Readability and Content Fidelity","authors":"Bryce Picton ,&nbsp;Saman Andalib ,&nbsp;Aidin Spina ,&nbsp;Brandon Camp ,&nbsp;Sean S. Solomon ,&nbsp;Jason Liang ,&nbsp;Patrick M. Chen ,&nbsp;Jefferson W. Chen ,&nbsp;Frank P. Hsu ,&nbsp;Michael Y. Oh","doi":"10.1016/j.ijmedinf.2024.105743","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>The escalating complexity of medical literature necessitates tools to enhance readability for patients. This study aimed to evaluate the efficacy of ChatGPT-4 in simplifying neurology and neurosurgical abstracts and patient education materials (PEMs) while assessing content preservation using Latent Semantic Analysis (LSA).</div></div><div><h3>Methods</h3><div>A total of 100 abstracts (25 each from <em>Neurosurgery, Journal of Neurosurgery, Lancet Neurology,</em> and <em>JAMA Neurology</em>) and 340 PEMs (66 from the <em>American Association of Neurological Surgeons,</em> 274 from the <em>American Academy</em> of <em>Neurology)</em> were transformed by a GPT-4.0 prompt requesting a 5th grade reading level. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FKRE) scores were used before/after transformation. Content fidelity was validated via LSA (ranging 0–1, 1 meaning identical topics) and by expert assessment (0–1) for a subset (n = 40). Pearson correlation coefficient compared assessments.</div></div><div><h3>Results</h3><div>FKGL decreased from 12th to 5th grade for abstracts and 13th to 5th for PEMs (p &lt; 0.001). FKRE scores showed similar improvement (p &lt; 0.001). LSA confirmed high content similarity for abstracts (mean cosine similarity 0.746) and PEMs (mean 0.953). Expert assessment indicated a mean topic similarity of 0.775 for abstracts and 0.715 for PEMs. The Pearson coefficient between LSA and expert assessment of textual similarity was 0.598 for abstracts and −0.167 for PEMs. Segmented analysis of similarity correlations revealed a correlation of 0.48 (p = 0.02) below 450 words and a −0.20 (p = 0.43) correlation above 450 words.</div></div><div><h3>Conclusion</h3><div>GPT-4.0 markedly improved the readability of medical texts, predominantly maintaining content integrity as substantiated by LSA and expert evaluations. LSA emerged as a reliable tool for assessing content fidelity within moderate-length texts, but its utility diminished for longer documents, overestimating similarity. These findings support the potential of AI in combating low health literacy, however, the similarity scores indicate expert validation is crucial. Future research must strive to improve transformation precision and develop validation methodologies.</div></div>","PeriodicalId":54950,"journal":{"name":"International Journal of Medical Informatics","volume":"195 ","pages":"Article 105743"},"PeriodicalIF":3.7000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Medical Informatics","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1386505624004064","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Introduction

The escalating complexity of medical literature necessitates tools to enhance readability for patients. This study aimed to evaluate the efficacy of ChatGPT-4 in simplifying neurology and neurosurgical abstracts and patient education materials (PEMs) while assessing content preservation using Latent Semantic Analysis (LSA).

Methods

A total of 100 abstracts (25 each from Neurosurgery, Journal of Neurosurgery, Lancet Neurology, and JAMA Neurology) and 340 PEMs (66 from the American Association of Neurological Surgeons, 274 from the American Academy of Neurology) were transformed by a GPT-4.0 prompt requesting a 5th-grade reading level. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FKRE) scores were computed before and after transformation. Content fidelity was validated via LSA (scores range from 0 to 1, with 1 indicating identical topics) and, for a subset (n = 40), by expert assessment on the same 0–1 scale. The Pearson correlation coefficient was used to compare the two assessments.
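
The paper does not publish its scoring code. As a rough illustration only, the Python sketch below computes the two Flesch readability metrics from their standard published formulas and estimates topic similarity with one common LSA implementation (TF-IDF followed by truncated SVD and cosine similarity). The syllable heuristic, the reference corpus, and the number of latent dimensions are all assumptions; the authors' exact configuration is not specified.

```python
# Illustrative sketch only: the study's actual pipeline is not published.
import re

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups, with a silent-'e' correction.
    Crude heuristic; polished readability tools use better counters."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)


def flesch_scores(text: str) -> tuple[float, float]:
    """Return (FKGL, FKRE) using the standard published Flesch formulas."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text) or ["a"]
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # mean words per sentence
    spw = syllables / len(words)   # mean syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fkre = 206.835 - 1.015 * wps - 84.6 * spw
    return fkgl, fkre


def lsa_similarity(original: str, simplified: str, corpus: list[str],
                   n_components: int = 100) -> float:
    """Cosine similarity between two texts in an LSA (TF-IDF + truncated SVD)
    space fitted on a reference corpus; corpus and rank are assumptions here."""
    tfidf = TfidfVectorizer(stop_words="english").fit(corpus)
    X = tfidf.transform(corpus)
    k = max(1, min(n_components, X.shape[1] - 1))  # keep SVD rank < n_features
    svd = TruncatedSVD(n_components=k, random_state=0).fit(X)
    a, b = svd.transform(tfidf.transform([original, simplified]))
    return float(cosine_similarity(a.reshape(1, -1), b.reshape(1, -1))[0, 0])
```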

Results

FKGL decreased from 12th to 5th grade for abstracts and from 13th to 5th grade for PEMs (p < 0.001); FKRE scores improved correspondingly (p < 0.001). LSA confirmed high content similarity for abstracts (mean cosine similarity 0.746) and PEMs (mean 0.953). Expert assessment indicated a mean topic similarity of 0.775 for abstracts and 0.715 for PEMs. The Pearson coefficient between LSA and expert assessments of textual similarity was 0.598 for abstracts and −0.167 for PEMs. Segmented analysis revealed a correlation of 0.48 (p = 0.02) for texts below 450 words and −0.20 (p = 0.43) for texts above 450 words.
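
As a sketch of the comparison step, the snippet below reconstructs the segmented Pearson analysis under the assumption that per-document word counts and the paired LSA/expert scores are available as parallel lists; only the 450-word cutoff comes from the paper, and all names are hypothetical.

```python
# Hypothetical reconstruction: correlate LSA and expert similarity scores
# separately for short and long documents, split at a word-count threshold.
from scipy.stats import pearsonr


def segmented_pearson(word_counts: list[int], lsa: list[float],
                      expert: list[float], threshold: int = 450) -> dict:
    """Pearson r and p-value between LSA and expert scores per segment;
    the 450-word threshold is taken from the paper."""
    results = {}
    for label, keep in (("below", lambda n: n < threshold),
                        ("above", lambda n: n >= threshold)):
        pairs = [(a, b) for n, a, b in zip(word_counts, lsa, expert) if keep(n)]
        xs, ys = zip(*pairs)  # assumes each segment is non-empty
        r, p = pearsonr(xs, ys)
        results[label] = {"r": r, "p": p}
    return results
```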

Conclusion

GPT-4.0 markedly improved the readability of medical texts while predominantly maintaining content integrity, as substantiated by LSA and expert evaluations. LSA emerged as a reliable tool for assessing content fidelity in moderate-length texts, but its utility diminished for longer documents, where it overestimated similarity. These findings support the potential of AI in combating low health literacy; however, the similarity scores indicate that expert validation remains crucial. Future research must strive to improve transformation precision and develop validation methodologies.
Source Journal
International Journal of Medical Informatics (Medicine - Computer Science: Information Systems)
CiteScore: 8.90
Self-citation rate: 4.10%
Annual publications: 217
Review time: 42 days
Journal Introduction: The International Journal of Medical Informatics provides an international medium for the dissemination of original results and interpretative reviews concerning the field of medical informatics. The Journal emphasizes the evaluation of systems in healthcare settings. The scope of the journal covers: information systems, including national or international registration systems, hospital information systems, departmental and/or physician's office systems, document handling systems, electronic medical record systems, standardization, systems integration, etc.; computer-aided medical decision support systems using heuristic, algorithmic and/or statistical methods as exemplified in decision theory, protocol development, artificial intelligence, etc.; educational computer-based programs pertaining to medical informatics or medicine in general; and organizational, economic, social, clinical impact, ethical and cost-benefit aspects of IT applications in health care.