Promoting Health Literacy With Human-in-the-Loop Video Understandability Classification of YouTube Videos: Development and Evaluation Study.

Journal of Medical Internet Research · Impact Factor 6.0 · Q1, Health Care Sciences & Services (JCR Region 2, Medicine) · Pub Date: 2025-04-08 · DOI: 10.2196/56080
Xiao Liu, Anjana Susarla, Rema Padman
Volume 27, e56080 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984000/pdf/ · Citations: 0

Abstract

Background: An estimated 93% of adults in the United States access the internet, with up to 80% looking for health information. However, only 12% of US adults are proficient enough in health literacy to interpret health information and make informed health care decisions meaningfully. With the vast amount of health information available in multimedia formats on social media platforms such as YouTube and Facebook, there is an urgent need and a unique opportunity to design an automated approach to curate online health information using multiple criteria to meet the health literacy needs of a diverse population.

Objective: This study aimed to develop an automated approach to assessing the understandability of patient educational videos according to the Patient Education Materials Assessment Tool (PEMAT) guidelines and evaluating the impact of video understandability on viewer engagement. We also offer insights for content creators and health care organizations on how to improve engagement with these educational videos on user-generated content platforms.

Methods: We developed a human-in-the-loop, augmented intelligence approach that explicitly focused on the human-algorithm interaction, combining PEMAT-based patient education constructs mapped to features extracted from the videos, annotations of the videos by domain experts, and cotraining methods from machine learning to assess the understandability of videos on diabetes and classify them. We further examined the impact of understandability on several dimensions of viewer engagement with the videos.
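The cotraining step described above can be illustrated with a minimal sketch: two classifiers, each trained on a different feature "view" of the same videos, iteratively pseudo-label the unlabeled pool for one another. The synthetic data, the two views, and the classifier choice below are all assumptions for illustration; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for two feature "views" of each video, e.g. transcript
# features vs. audiovisual features (the actual features are assumptions here).
n_labeled, n_unlabeled = 40, 200
y = rng.integers(0, 2, n_labeled + n_unlabeled)       # 1 = understandable
view_a = y[:, None] + rng.normal(size=(y.size, 5))
view_b = y[:, None] + rng.normal(size=(y.size, 5))

labels = y.astype(float)
U = np.arange(n_labeled, y.size)                      # unlabeled pool
labels[n_labeled:] = np.nan                           # hide their targets

clf_a, clf_b = LogisticRegression(), LogisticRegression()
for _ in range(5):                                    # a few co-training rounds
    known = ~np.isnan(labels)
    clf_a.fit(view_a[known], labels[known])
    clf_b.fit(view_b[known], labels[known])
    # Each classifier pseudo-labels the unlabeled video it is most confident
    # about, growing the training set available to the other view.
    for clf, view in ((clf_a, view_a), (clf_b, view_b)):
        if U.size == 0:
            break
        proba = clf.predict_proba(view[U])
        best = int(np.argmax(proba.max(axis=1)))      # most confident example
        labels[U[best]] = clf.classes_[proba[best].argmax()]
        U = np.delete(U, best)

# Held-out check against the hidden labels (synthetic data only).
acc = float((clf_a.predict(view_a[n_labeled:]) == y[n_labeled:]).mean())
```

In the study, the expert annotations play the role of the initial labeled set, which is what makes the loop "human-in-the-loop" rather than purely self-supervised.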

Results: We collected 9873 YouTube videos on diabetes using search keywords extracted from a patient-oriented forum and reviewed by a medical expert. Our machine learning methods achieved a weighted precision of 0.84, a weighted recall of 0.79, and an F1-score of 0.81 in classifying video understandability and could effectively identify patient educational videos that medical experts would like to recommend for patients. Videos rated as highly understandable had a higher average view count (average treatment effect [ATE]=2.55; P<.001), like count (ATE=2.95; P<.001), and comment count (ATE=3.10; P<.001) than less understandable videos. In addition, in a user study, 4 medical experts recommended 72% (144/200) of the top 10 videos ranked by understandability compared to 40% (80/200) of the top 10 videos ranked by YouTube's default algorithm for 20 randomly selected search keywords.
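The weighted metrics reported above average per-class precision, recall, and F1, with each class weighted by its support (number of true examples). A toy illustration with invented labels (not the study's data or label scheme):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Toy understandability labels (0 = low, 1 = medium, 2 = high); the study's
# actual label scheme and data are not reproduced here.
y_true = [2, 2, 1, 0, 2, 1, 0, 2, 1, 2]
y_pred = [2, 1, 1, 0, 2, 1, 0, 2, 2, 2]

p = precision_score(y_true, y_pred, average="weighted")
r = recall_score(y_true, y_pred, average="weighted")   # equals accuracy
f1 = f1_score(y_true, y_pred, average="weighted")
```

Support weighting keeps a rare class from dominating the average, which matters when one understandability level is much more common than the others.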

Conclusions: We developed a human-in-the-loop, scalable algorithm to assess the understandability of health information on YouTube. Our method optimally combines expert input with algorithmic support, enhancing engagement and aiding medical experts in recommending educational content. This solution also guides health care organizations in creating effective patient education materials for underserved health topics.

Source journal metrics: CiteScore 14.40 · Self-citation rate 5.40% · Articles per year 654 · Time to review: 1 month
About the journal: The Journal of Medical Internet Research (JMIR) is a highly respected publication in health informatics and health services. Founded in 1999, JMIR has been a pioneer in the field for over two decades, focusing on digital health, data science, health informatics, and emerging technologies for health, medicine, and biomedical research. It ranks in the first quartile (Q1) by Impact Factor and holds the #1 position on Google Scholar in the "Medical Informatics" discipline.