DMCMFuse: A dual-phase model via multi-dimensional cross-scanning state space model for multi-modality medical image fusion

Displays · Impact Factor 3.4 · CAS Region 2 (Engineering) · Q1, Computer Science, Hardware & Architecture · Published: 2025-09-01 (Epub: 2025-04-19) · DOI: 10.1016/j.displa.2025.103056
Hang Zhao, Zitong Wang, Chenyang Li, Rui Zhu, Feiyang Yang
Published in Displays, vol. 89, Article 103056.
Citations: 0

Abstract

Multi-modality medical image fusion is crucial for improving diagnostic accuracy by combining complementary information from different imaging modalities. However, a key challenge is effectively balancing the abundant modality-specific features (e.g., soft-tissue details in MRI and bone structure in CT) against the relatively scarce modality-shared features; failing to do so often leads to suboptimal fusion outcomes. To address this, we propose DMCMFuse, a dual-phase model for multi-modality medical image fusion that leverages a multi-dimensional cross-scanning state-space model. The model first decomposes multi-modality images into distinct frequency components to maintain spatial and channel coherence. In the fusion phase, we apply Mamba for the first time in medical image fusion and develop a fusion method that integrates spatial scanning, spatial interaction, and channel scanning. This multi-dimensional cross-scanning approach effectively combines features from each modality, ensuring the retention of both global and local information. Comprehensive experimental results demonstrate that DMCMFuse surpasses state-of-the-art methods, generating fused images of superior quality with enhanced structural consistency and richer feature representation, making it highly effective for medical image analysis and diagnosis.
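The core idea of multi-dimensional cross-scanning — flattening a 2-D feature map into 1-D token sequences along several spatial directions plus a channel direction, running a state-space recurrence over each, and merging the results across modalities — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy linear recurrence below stands in for the actual Mamba selective-scan kernel, and all function names are illustrative.

```python
import numpy as np

def directional_scans(feat):
    """Flatten a (C, H, W) feature map into 1-D scan sequences:
    four spatial orders (row/column, forward/backward) plus a
    channel-wise scan, mirroring the cross-scanning directions."""
    C, H, W = feat.shape
    rows = feat.reshape(C, H * W)                     # row-major order
    cols = feat.transpose(0, 2, 1).reshape(C, H * W)  # column-major order
    return {
        "row_fwd": rows,
        "row_bwd": rows[:, ::-1],
        "col_fwd": cols,
        "col_bwd": cols[:, ::-1],
        "channel": rows.T,                            # scan across channels
    }

def toy_state_space_scan(seq, decay=0.9):
    """Stand-in for an SSM/Mamba recurrence: h_t = decay*h_{t-1} + x_t,
    applied independently along the last axis of `seq`."""
    out = np.empty_like(seq, dtype=float)
    h = np.zeros(seq.shape[:-1])
    for t in range(seq.shape[-1]):
        h = decay * h + seq[..., t]
        out[..., t] = h
    return out

def fuse_cross_scans(feat_a, feat_b, decay=0.9):
    """Scan both modalities in every direction, then merge per direction
    (here a simple average; the paper learns the interaction)."""
    sa, sb = directional_scans(feat_a), directional_scans(feat_b)
    return {k: 0.5 * (toy_state_space_scan(sa[k], decay)
                      + toy_state_space_scan(sb[k], decay))
            for k in sa}
```

Each scan direction exposes a different 1-D neighborhood structure of the same feature map, which is why combining them recovers both global context (long recurrences) and local detail (adjacent tokens in each order).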
Source journal: Displays (Engineering — Electronics & Electrical)
CiteScore: 4.60 · Self-citation rate: 25.60% · Articles per year: 138 · Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display-technology and human-factors engineers new to the field, will also occasionally be featured.