LLMs for thematic summarization in qualitative healthcare research: feasibility and insights.

JMIR AI (IF 2.0) · Pub Date: 2025-02-27 · DOI: 10.2196/64447
Arturo Castellanos, Haoqiang Jiang, Paulo Gomes, Debra Vander Meer, Alfred Castillo

Abstract

Background: The application of large language models (LLMs) in analyzing expert textual online data is a topic of growing importance in computational linguistics and qualitative research within healthcare settings.

Objective: The objective of this study is to understand how large language models (LLMs) can help analyze expert textual data. Topic modeling scales thematic analysis across a large corpus, but the resulting topics still require human interpretation. We investigate the use of LLMs to help researchers scale this interpretation.

Methods: The project proceeded in four phases: (1) collecting posts from an online nurse forum and cleaning and preprocessing the data; (2) deriving topics with latent Dirichlet allocation (LDA); (3) having human coders categorize and interpret the topics; and (4) using LLMs to complement and scale the interpretation of the thematic analysis. We then compared the outcomes of human interpretation with those derived from LLMs.

Results: There was substantial agreement (80%) between LLM and human interpretation. For two-thirds of the topics, human evaluation and LLMs agreed on the alignment and convergence of themes. Moreover, LLM subthemes added depth of analysis within LDA topics, providing detailed explanations that align with and build upon the established human themes. Nonetheless, LLMs sometimes identified coherence and complementarity where human evaluation did not.
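The headline agreement figure corresponds to simple percent agreement between the two sets of theme labels. A toy illustration, with entirely invented labels (the study's actual topics and themes are not listed in this abstract):

```python
# Illustrative percent-agreement computation between human and LLM
# theme labels, one label per LDA topic. Labels are hypothetical.
human_labels = ["staffing", "workload", "burnout", "education", "staffing",
                "workload", "burnout", "education", "staffing", "workload"]
llm_labels   = ["staffing", "workload", "burnout", "education", "staffing",
                "workload", "burnout", "career", "pay", "workload"]

matches = sum(h == m for h, m in zip(human_labels, llm_labels))
agreement = matches / len(human_labels)
print(f"agreement: {agreement:.0%}")  # 8 of 10 labels match -> 80%
```

Percent agreement does not correct for chance; a chance-corrected statistic such as Cohen's kappa would be a natural complement, though the abstract does not state whether one was used.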

Conclusions: LLMs enable the automation of the interpretation task in qualitative research. Challenges remain in using LLMs to evaluate the resulting themes.

