Contrastive Learning Guided Fusion Network for Brain CT and MRI

IEEE Journal of Biomedical and Health Informatics · IF 6.8 · Q1 (Computer Science, Information Systems) · Vol. 29, No. 7, pp. 5028-5041 · Pub Date: 2025-02-25 · DOI: 10.1109/JBHI.2025.3545172
Yuping Huang;Weisheng Li;Bin Xiao;Guofen Wang;Dan He;Xiaoyu Qiao

Abstract

Medical image fusion technology provides clinicians with more detailed and precise diagnostic information. This paper introduces CLGFusion, an efficient CT and MRI fusion network guided by contrastive learning. At the feature-encoding stage, CLGFusion employs two encoder branches that interact with and learn from each other: a single-view encoder is trained to predict the feature representation of an image from different augmented views, while a multi-view encoder is updated as the exponential moving average of the single-view encoder. Contrastive learning is integrated into medical image fusion by constructing a feature contrast space that requires no negative samples; this space exploits the difference between the feature products of a source image and its corresponding augmented image. Combined with a structural-similarity loss, it continuously guides the network to refine its fusion results, yielding more accurate and efficient image fusion. The resulting model is an end-to-end unsupervised fusion network. Experimental validation shows that the proposed method achieves performance comparable to state-of-the-art techniques in both subjective evaluation and objective metrics.
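Two ingredients of the abstract can be sketched concretely: the exponential-moving-average (EMA) update that lets the multi-view encoder track the single-view encoder, and a structural-similarity term usable as a loss. The sketch below is illustrative only and does not reproduce the paper's actual architecture; the `decay` value, the single-window (global) SSIM formulation, and all names are assumptions, not details from the source.

```python
import numpy as np

def ema_update(target_params, online_params, decay=0.99):
    """EMA update: the multi-view (target) encoder's weights track the
    single-view (online) encoder's weights. `decay` is a hypothetical value."""
    return [decay * t + (1.0 - decay) * o
            for t, o in zip(target_params, online_params)]

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window (global) structural similarity between two images,
    shown here as a simple stand-in for a structural-similarity loss term."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy usage: one weight matrix per encoder; identical images give SSIM = 1.
target = [np.zeros((2, 2))]
online = [np.ones((2, 2))]
target = ema_update(target, online, decay=0.9)  # each entry becomes 0.1

img = np.random.default_rng(0).random((8, 8))
loss = 1.0 - ssim_global(img, img)  # 0.0 for a perfect reconstruction
```

In BYOL-style schemes without negative samples, the EMA target encoder provides a slowly moving, stable prediction target, which is what prevents the two branches from collapsing to a trivial representation.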
Source Journal

IEEE Journal of Biomedical and Health Informatics
Categories: Computer Science, Information Systems; Computer Science, Interdisciplinary Applications
CiteScore: 13.60
Self-citation rate: 6.50%
Articles per year: 1151
Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics such as interoperability, evidence-based medicine, and secure patient data are also addressed.