Decoupled contrastive learning for multilingual multimodal medical pre-trained model

Neurocomputing, vol. 633, Article 129809 · IF 6.5 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Publication date: 2025-06-07 (Epub: 2025-03-01) · DOI: 10.1016/j.neucom.2025.129809
Authors: Qiyuan Li, Chen Qiu, Haijiang Liu, Jinguang Gu, Dan Luo
Full text: https://www.sciencedirect.com/science/article/pii/S0925231225004813
Citations: 0

Abstract

Multilingual multimodal pre-training aims to facilitate the integration of conceptual representations across diverse languages and modalities within a shared, high-dimensional semantic space. This endeavor in healthcare faces challenges related to language diversity, suboptimal multimodal interactions, and an absence of coherent multilingual multimodal representations. In response to these challenges, we introduce a novel multilingual multimodal medical pre-training model. Initially, we employ a strategic augmentation of the medical corpus by expanding the MIMIC-CXR report dataset to 20 distinct languages using machine translation techniques. Subsequently, we develop a targeted label disambiguation technique to address the labeling noise within decoupled contrastive learning. In particular, it categorizes and refines uncertain phrases within the clinical reports based on disease type, promoting finer-grained semantic similarity and improving inter-modality interactions. Building on these proposals, we present a refined multilingual multimodal medical pre-trained model, significantly enhancing the understanding of medical multimodal data and adapting the model to multilingual medical contexts. Experiments reveal that our model outperforms other baselines in medical image classification and multilingual medical image–text retrieval by up to 13.78% and 12.6%, respectively.
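The "decoupled contrastive learning" the abstract builds on is commonly formulated as an InfoNCE-style loss in which the positive pair is removed from the denominator, so the positive and negative terms no longer compete. The sketch below is an illustrative NumPy implementation of that general formulation for one direction (image-to-text) of a batch of paired embeddings; it is not the authors' implementation, and the label-disambiguation refinement described above is not modeled here.

```python
import numpy as np

def decoupled_contrastive_loss(img, txt, tau=0.1):
    """Decoupled contrastive loss for a batch of paired image/text
    embeddings: like InfoNCE, but the matched (positive) pair is
    excluded from the log-sum-exp denominator."""
    # L2-normalize so dot products are cosine similarities
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau          # (B, B) similarity matrix
    B = logits.shape[0]
    pos = np.diag(logits)               # similarity of each matched pair
    mask = ~np.eye(B, dtype=bool)       # drop the positive from the denominator
    neg = logits[mask].reshape(B, B - 1)
    lse = np.log(np.exp(neg).sum(axis=1))  # log-sum-exp over negatives only
    return float(np.mean(-pos + lse))
```

In practice this term would be symmetrized (image-to-text plus text-to-image) and averaged over the batch; well-aligned pairs drive the loss down because the positive similarity enters only in the numerator.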
Source journal
Neurocomputing (Engineering/Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10 · Self-citation rate: 10.00% · Articles per year: 1,382 · Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.