MMR-Mamba: Multi-modal MRI reconstruction with Mamba and spatial-frequency information fusion
Jing Zou, Lanqing Liu, Qi Chen, Shujun Wang, Zhanli Hu, Xiaohan Xing, Jing Qin
Medical Image Analysis, Volume 102, Article 103549. Published 2025-03-21. DOI: 10.1016/j.media.2025.103549
Citations: 0
Abstract
Multi-modal MRI offers valuable complementary information for diagnosis and treatment; however, its clinical utility is limited by prolonged scanning time. To accelerate the acquisition process, a practical approach is to reconstruct images of the target modality, which requires longer scanning time, from under-sampled k-space data using the fully-sampled reference modality with shorter scanning time as guidance. The primary challenge of this task lies in comprehensively and efficiently integrating complementary information from different modalities to achieve high-quality reconstruction. Existing methods struggle with this challenge: (1) convolution-based models fail to capture long-range dependencies; (2) transformer-based models, while excelling in global feature modeling, suffer from quadratic computational complexity. To address this dilemma, we propose MMR-Mamba, a novel framework that thoroughly and efficiently integrates multi-modal features for MRI reconstruction, leveraging Mamba’s capability to capture long-range dependencies with linear computational complexity while exploiting global properties of the Fourier domain. Specifically, we first design a Target modality-guided Cross Mamba (TCM) module in the spatial domain, which maximally restores the target modality information by selectively incorporating relevant information from the reference modality. Then, we introduce a Selective Frequency Fusion (SFF) module to efficiently integrate global information in the Fourier domain and recover high-frequency signals for the reconstruction of structural details. Furthermore, we devise an Adaptive Spatial-Frequency Fusion (ASFF) module, which mutually enhances the spatial and frequency domains by supplementing less informative channels from one domain with corresponding channels from the other. Extensive experiments on the BraTS and fastMRI knee datasets demonstrate the superiority of our MMR-Mamba over state-of-the-art reconstruction methods. The code is publicly available at https://github.com/zoujing925/MMR-Mamba.
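To make the frequency-domain fusion idea concrete, below is a minimal PyTorch sketch in the spirit of the described Selective Frequency Fusion step: both modalities' feature maps are moved to the Fourier domain, blended with a learned per-channel gate, and transformed back. The class name, the sigmoid gating scheme, and all tensor shapes are illustrative assumptions, not the authors' implementation; the official code is in the linked repository.

```python
# Hypothetical sketch of Fourier-domain fusion of a target and a reference
# modality, loosely following the SFF idea described in the abstract.
# Names and the gating mechanism are assumptions for illustration only;
# see https://github.com/zoujing925/MMR-Mamba for the authors' code.
import torch
import torch.nn as nn


class SelectiveFrequencyFusion(nn.Module):
    """Fuse target and reference features in the Fourier domain.

    Each input is transformed with a 2-D FFT, the spectra are mixed with a
    learned per-channel gate (so reference high-frequency content can
    supplement the under-sampled target), and the result is mapped back to
    the spatial domain.
    """

    def __init__(self, channels: int):
        super().__init__()
        # One learnable mixing weight per channel; sigmoid keeps it in (0, 1).
        self.gate = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, target: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # Move both feature maps to the frequency domain.
        t_freq = torch.fft.fft2(target, norm="ortho")
        r_freq = torch.fft.fft2(reference, norm="ortho")
        # Selectively blend the two spectra channel by channel.
        alpha = torch.sigmoid(self.gate)
        fused = alpha * t_freq + (1.0 - alpha) * r_freq
        # Return to the spatial domain; keep the real part of the inverse FFT.
        return torch.fft.ifft2(fused, norm="ortho").real


if __name__ == "__main__":
    sff = SelectiveFrequencyFusion(channels=64)
    target = torch.randn(1, 64, 128, 128)     # under-sampled target features
    reference = torch.randn(1, 64, 128, 128)  # fully-sampled reference features
    print(sff(target, reference).shape)       # torch.Size([1, 64, 128, 128])
```

A per-channel gate is the simplest choice that matches the abstract's description of supplementing less informative channels from one domain with channels from the other; the paper's ASFF module presumably learns richer, content-dependent weights.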
About the Journal
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.