Infusing Multi-Hop Medical Knowledge Into Smaller Language Models for Biomedical Question Answering

IF 6.8 · CAS Region 2 (Medicine) · Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · IEEE Journal of Biomedical and Health Informatics · Pub Date: 2025-03-19 · DOI: 10.1109/JBHI.2025.3547444
Jing Chen; Zhihua Wei; Wen Shen; Rui Shang
{"title":"Infusing Multi-Hop Medical Knowledge Into Smaller Language Models for Biomedical Question Answering","authors":"Jing Chen;Zhihua Wei;Wen Shen;Rui Shang","doi":"10.1109/JBHI.2025.3547444","DOIUrl":null,"url":null,"abstract":"MedQA-USMLE is a challenging biomedical question answering (BQA) task, as its questions typically involve multi-hop reasoning. To solve this task, BQA systems should possess not only extensive medical professional knowledge but also strong medical reasoning capabilities. While state-of-the-art larger language models, such as Med-PaLM 2, have overcome this challenge, smaller language models (SLMs) still struggle with it. To bridge this gap, we introduces a multi-hop medical knowledge infusion (MHMKI) procedure to endow SLMs with medical reasoning capabilities. Specifically, we categorize MedQA-USMLE questions into distinct reasoning types, then tailor pre-training instances for each type of questions using the semi-structured information and hyperlinks of Wikipedia articles. To enable SLMs to efficiently capture the multi-hop knowledge contained in these instances, we design a reasoning chain masked language model to further pre-train BERT models. Moreover, we convert the pre-training instances into a composite question answering dataset for intermediate fine-tuning of GPT models. We evaluate MHMKI on six SLMs across five datasets spanning three BQA tasks. The results demonstrate that MHMKI consistently improves SLMs' performance, particularly on tasks requiring substantial medical reasoning. For instance, the accuracy of MedQA-USMLE shows a significant increase of 5.3% on average.","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 7","pages":"5317-5328"},"PeriodicalIF":6.8000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10932873/","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

MedQA-USMLE is a challenging biomedical question answering (BQA) task, as its questions typically involve multi-hop reasoning. To solve this task, BQA systems should possess not only extensive medical professional knowledge but also strong medical reasoning capabilities. While state-of-the-art larger language models, such as Med-PaLM 2, have overcome this challenge, smaller language models (SLMs) still struggle with it. To bridge this gap, we introduce a multi-hop medical knowledge infusion (MHMKI) procedure to endow SLMs with medical reasoning capabilities. Specifically, we categorize MedQA-USMLE questions into distinct reasoning types, then tailor pre-training instances for each type of question using the semi-structured information and hyperlinks of Wikipedia articles. To enable SLMs to efficiently capture the multi-hop knowledge contained in these instances, we design a reasoning chain masked language model to further pre-train BERT models. Moreover, we convert the pre-training instances into a composite question answering dataset for intermediate fine-tuning of GPT models. We evaluate MHMKI on six SLMs (three BERT models and three GPT models) across five datasets spanning three BQA tasks. The results demonstrate that MHMKI consistently improves SLMs' performance, particularly on tasks requiring substantial medical reasoning. For instance, accuracy on MedQA-USMLE shows a significant increase of 5.3% on average.
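The abstract names two data-construction steps without implementation detail. The Python sketch below illustrates both under stated assumptions; it is not the authors' released code. The example passage, the entity chain, the toy whitespace tokenizer, and the function names mask_reasoning_chain and to_qa_pair are all hypothetical: in the paper's actual pipeline the instances come from Wikipedia semi-structured text, the masked instances further pre-train BERT models, and the QA-pair shape shown here is only one plausible reading of "composite question answering dataset".

# Minimal sketch (assumptions as stated above) of:
# (1) masking the entities along a multi-hop "reasoning chain" so a
#     BERT-style model must recover the links during further pre-training;
# (2) flattening the same instance into a cloze-style QA pair for
#     intermediate fine-tuning of GPT-style models.

MASK = "[MASK]"


def mask_reasoning_chain(passage: str, chain_entities: list[str]) -> tuple[str, list[str]]:
    """Mask every occurrence of each chain entity in the passage.

    Uses a toy whitespace tokenizer, so punctuation attached to a masked
    word is dropped. Returns the masked passage and the ordered labels.
    """
    tokens = passage.split()
    entity_words = {w for e in chain_entities for w in e.lower().split()}
    labels = []
    for i, tok in enumerate(tokens):
        if tok.strip(".,;").lower() in entity_words:
            labels.append(tok.strip(".,;"))
            tokens[i] = MASK
    return " ".join(tokens), labels


def to_qa_pair(masked_passage: str, labels: list[str]) -> dict:
    """Flatten one masked instance into a cloze-style QA example, one
    plausible (assumed) shape for a composite QA dataset."""
    unique = list(dict.fromkeys(labels))  # dedupe while keeping chain order
    return {"question": masked_passage, "answer": "; ".join(unique)}


# Hypothetical two-hop instance: metformin -> AMPK -> hepatic gluconeogenesis.
passage = ("Metformin activates AMPK. "
           "AMPK suppresses hepatic gluconeogenesis, lowering blood glucose.")
chain = ["AMPK", "hepatic gluconeogenesis"]

masked, labels = mask_reasoning_chain(passage, chain)
print(masked)
# Metformin activates [MASK] [MASK] suppresses [MASK] [MASK] lowering blood glucose.
print(to_qa_pair(masked, labels))
# {'question': '... [MASK] ...', 'answer': 'AMPK; hepatic; gluconeogenesis'}

The design point the sketch tries to capture: masking the bridge entities, rather than random tokens, forces the model to predict exactly the links that carry the multi-hop reasoning, which is the stated motivation for further pre-training BERT on such instances.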
Source Journal
IEEE Journal of Biomedical and Health Informatics
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
CiteScore: 13.60
Self-citation rate: 6.50%
Annual articles: 1151
About the Journal: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.
Latest Articles in This Journal
FedGSCA: Medical Federated Learning with Global Sample Selector and Client Adaptive Adjuster under Label Noise.
DAMON: Difference-Aware Medical Visual Question Answering via Multimodal Large Language Model.
Radar-Based Monitoring for Non-Contact Detection of Nocturnal Hypoglycemia in Diabetes: A Review.
SSF-SET: A Discrete EEG Token-based Framework for Sleep Stage Forecasting.
DiabLLM: An LLM-Based Framework for Blood Glucose Prediction in Type 1 Diabetes.