Domain adaptation of transformer-based neural network model for clinical note classification in Indian healthcare

Swati Saigaonkar, Vaibhav Narawade
{"title":"Domain adaptation of transformer-based neural network model for clinical note classification in Indian healthcare","authors":"Swati Saigaonkar, Vaibhav Narawade","doi":"10.1007/s41870-024-02053-z","DOIUrl":null,"url":null,"abstract":"<p>The exploration of clinical notes has garnered attention, primarily owing to the wealth of unstructured information they encompass. Although substantial research has been carried out, notable gaps persist. One such gap pertains to the absence of work on real-time Indian data. The work commenced by initially using Medical Information Mart for Intensive Care (MIMIC III) dataset, concentrating on diseases such as Asthma, Myocardial Infarction (MI), and Chronic Kidney Diseases (CKD), for training the model. A novel model, transformer-based, was built which incorporated knowledge of abbreviations, symptoms, and domain knowledge of the diseases, named as SM-DBERT + + . Subsequently, the model was applied to an Indian dataset using transfer learning, where domain knowledge extracted from Indian sources was utilized to adapt to domain differences. Further, an ensemble of pre-trained models was built, applying transfer learning principles. Through this comprehensive methodology, we aimed to bridge the gap pertaining to the application of deep learning techniques to Indian healthcare datasets. The results obtained were better than fine-tuned Bidirectional Encoder Representations from Transformers (BERT), Distilled BERT (DISTILBERT) and A Lite BERT (ALBERT) models and also other specialized models like Scientific BERT (SCI-BERT), Clinical Biomedical BERT (Clinical Bio-BERT), and Biomedical BERT (BIOBERT) with an accuracy of 0.93 when full notes were used and an accuracy of 0.89 when extracted sections were used. It has demonstrated that model trained on one dataset yields good results on another similar dataset as this model incorporates domain knowledge which could be modified during transfer learning to adapt to the new domain.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"107 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41870-024-02053-z","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The exploration of clinical notes has garnered attention, primarily owing to the wealth of unstructured information they encompass. Although substantial research has been carried out, notable gaps persist. One such gap is the absence of work on real-time Indian data. The work began by training a model on the Medical Information Mart for Intensive Care (MIMIC-III) dataset, concentrating on diseases such as Asthma, Myocardial Infarction (MI), and Chronic Kidney Disease (CKD). A novel transformer-based model, named SM-DBERT++, was built that incorporates knowledge of abbreviations, symptoms, and domain knowledge of the diseases. Subsequently, the model was applied to an Indian dataset using transfer learning, where domain knowledge extracted from Indian sources was utilized to adapt to domain differences. Further, an ensemble of pre-trained models was built, applying transfer-learning principles. Through this comprehensive methodology, we aimed to bridge the gap in applying deep learning techniques to Indian healthcare datasets. The results obtained were better than those of fine-tuned Bidirectional Encoder Representations from Transformers (BERT), Distilled BERT (DistilBERT), and A Lite BERT (ALBERT) models, as well as other specialized models such as Scientific BERT (SciBERT), Clinical Biomedical BERT (Clinical BioBERT), and Biomedical BERT (BioBERT), with an accuracy of 0.93 when full notes were used and 0.89 when extracted sections were used. This demonstrates that a model trained on one dataset can yield good results on another, similar dataset, since the incorporated domain knowledge can be modified during transfer learning to adapt to the new domain.
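The abstract describes a two-stage workflow: fine-tune a transformer classifier on MIMIC-III with injected domain knowledge, then transfer the learned weights to an Indian dataset while swapping in domain knowledge from Indian sources. The sketch below is a minimal illustration of that general pattern, not the authors' SM-DBERT++ implementation: it fine-tunes a public DistilBERT checkpoint with Hugging Face Transformers, expanding clinical abbreviations in preprocessing as one plausible way of injecting abbreviation knowledge. The lexicons, the `expand_abbreviations` helper, the frozen-layer count, and the data variables (`mimic_notes`, `indian_notes`, and their labels) are all illustrative assumptions.

```python
# Minimal two-stage transfer-learning sketch (assumed workflow, not the
# authors' released code). Requires: torch, transformers.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Asthma", "MI", "CKD"]  # the three diseases named in the abstract

# Hypothetical domain lexicons: abbreviations are expanded in-line so the
# encoder sees full clinical terms. The entries are illustrative only.
MIMIC_ABBREVIATIONS = {
    "MI": "myocardial infarction",
    "CKD": "chronic kidney disease",
    "SOB": "shortness of breath",
}
INDIAN_ABBREVIATIONS = dict(MIMIC_ABBREVIATIONS)
INDIAN_ABBREVIATIONS["K/C/O"] = "known case of"  # assumed local convention

def expand_abbreviations(text: str, lexicon: dict) -> str:
    """Replace known abbreviations with their full clinical terms."""
    for abbr, full in lexicon.items():
        text = text.replace(abbr, full)
    return text

class NotesDataset(Dataset):
    """Tokenized clinical notes paired with integer disease labels."""
    def __init__(self, notes, labels, tokenizer, lexicon):
        texts = [expand_abbreviations(t, lexicon) for t in notes]
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=512, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

def train(model, dataset, epochs=3, lr=2e-5):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    opt = AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in DataLoader(dataset, batch_size=16, shuffle=True):
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss  # cross-entropy over the 3 classes
            loss.backward()
            opt.step()
            opt.zero_grad()

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS))

# Stage 1: fine-tune on MIMIC-III notes (placeholder variables).
train(model, NotesDataset(mimic_notes, mimic_labels,
                          tokenizer, MIMIC_ABBREVIATIONS))

# Stage 2: transfer to the Indian dataset -- reuse the learned weights,
# swap in the Indian-source lexicon, and freeze the lower encoder layers
# so only the upper layers and classifier head adapt to the new domain.
for layer in model.distilbert.transformer.layer[:4]:
    for p in layer.parameters():
        p.requires_grad = False
train(model, NotesDataset(indian_notes, indian_labels,
                          tokenizer, INDIAN_ABBREVIATIONS), epochs=2)
```

The ensemble of pre-trained models mentioned in the abstract could be built the same way, fine-tuning several checkpoints (e.g. BERT, ALBERT, BioBERT) and aggregating their class probabilities, though the exact aggregation scheme the authors used is not specified in the abstract.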

