UnICLAM: Contrastive representation learning with adversarial masking for unified and interpretable Medical Vision Question Answering

IF 11.8 · CAS Region 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Medical Image Analysis · Pub Date: 2025-01-15 · DOI: 10.1016/j.media.2025.103464
Chenlu Zhan, Peng Peng, Hongwei Wang, Gaoang Wang, Yu Lin, Tao Chen, Hongsen Wang
Medical Image Analysis, Volume 101, Article 103464
Citations: 0

Abstract

Medical Visual Question Answering aims to assist doctors in decision-making when answering clinical questions about radiology images. Nevertheless, current models learn cross-modal representations by housing the vision and text encoders in two separate spaces, which inevitably leads to indirect semantic alignment. In this paper, we propose UnICLAM, a Unified and Interpretable Medical-VQA model based on Contrastive Representation Learning with Adversarial Masking. To learn an aligned image–text representation, we first establish a unified dual-stream pre-training structure with a gradual soft-parameter sharing strategy. Specifically, the strategy constrains the vision and text encoders to stay close in the same space, and this constraint is gradually loosened as the layer depth increases, narrowing the distance between the two modalities. To capture a unified semantic cross-modal representation, we extend adversarial masking data augmentation to the contrastive representation learning of vision and text in a unified manner: while encoder training minimizes the distance between the original and masked samples, the adversarial masking module is trained to maximize it. We further explore this unified adversarial masking augmentation, which improves ante-hoc interpretability while maintaining strong performance and efficiency. Experimental results on the VQA-RAD and SLAKE benchmarks demonstrate that UnICLAM outperforms 11 existing state-of-the-art Medical-VQA methods. More importantly, we additionally discuss the performance of UnICLAM in diagnosing heart failure, verifying that it exhibits superior few-shot adaptation in practical disease diagnosis. The code and models will be released upon acceptance of the paper.
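The two core mechanisms the abstract describes — the gradually loosened soft-parameter sharing constraint and the adversarial masking min-max game — can be sketched in plain Python. This is a minimal illustrative sketch only: the function names, the exponential decay schedule, and the greedy one-token adversary are assumptions for clarity, not the paper's actual implementation.

```python
import math

def soft_sharing_penalty(vision_layers, text_layers, decay=0.5):
    """Layer-wise L2 penalty tying vision and text encoder weights.

    The per-layer weight decays with depth, so lower layers are held
    close together while upper layers are gradually freed, mirroring
    the "gradually loosened" constraint described in the abstract.
    The exponential schedule is a hypothetical choice.
    """
    penalty = 0.0
    for depth, (v, t) in enumerate(zip(vision_layers, text_layers)):
        weight = math.exp(-decay * depth)  # loosen as depth increases
        penalty += weight * sum((a - b) ** 2 for a, b in zip(v, t))
    return penalty

def adversarial_mask_index(tokens, embed, mask_token="[MASK]"):
    """Greedy stand-in for the adversarial masking module.

    Returns the index whose masking maximizes the distance between the
    original and masked embeddings; encoder training would then
    minimize that same distance, giving the min-max game described in
    the abstract. `embed` is any callable mapping a token list to an
    embedding vector (a placeholder for the real encoder).
    """
    base = embed(tokens)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_index, best_dist = 0, -1.0
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        d = sq_dist(base, embed(masked))
        if d > best_dist:
            best_index, best_dist = i, d
    return best_index
```

In a full training loop, `soft_sharing_penalty` would be added to the contrastive loss, while `adversarial_mask_index` would be replaced by a learned masking module updated in the opposite direction of the encoders.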
Source journal: Medical Image Analysis (Engineering, Biomedical)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
Journal description: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.
Latest articles in this journal:
Enhancing Uncertainty Assessment in Dynamic PET Imaging with Residual Permutation and Clustering
Incorporating global-local tissue changes to predict future breast cancer from longitudinal screening mammograms
Context-Enriched Contrastive Auto-Encoder with Topology Learning for Medical Hyperspectral Image Classification to Diagnose Tumors
4D Monocular Surgical Reconstruction under Arbitrary Camera Motions
Generative data-engine foundation model for universal few-shot 2D vascular image segmentation