Brain tumor segmentation based on multimodal magnetic resonance imaging (MRI) plays a crucial role in clinical diagnosis and treatment planning. However, the absence or unavailability of certain modalities often degrades segmentation performance in real-world clinical settings. In this study, we propose a novel consistency-driven state-space model (SSM) that incorporates learnable consistency features into the SSM architecture, establishing a robust framework for brain tumor segmentation under incomplete modality conditions. The approach offers a new strategy for explicitly capturing cross-modal consistency while retaining efficient long-range dependency modeling, tailored to segmentation tasks with missing modalities. Specifically, we design a scale-aware fusion block that integrates the learnable consistency features to aggregate modality-specific information at an early stage. A Mamba-based multimodal consistency fusion technique is then employed, enabling long-range dependency modeling with linear computational complexity. To prevent modality bias, we further introduce a progressive attention weighting module that dynamically balances modality-specific features. Additionally, an adaptive feature correction mechanism refines both modality-specific and consistency features along the spatial and channel dimensions. The proposed method facilitates modality integration while minimizing the conflicts that can arise from directly fusing potentially inconsistent modalities. Comprehensive experiments on the BraTS2018 and BraTS2020 datasets demonstrate that our model surpasses existing state-of-the-art approaches under various incomplete-modality scenarios.
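To make the modality-balancing idea concrete, the sketch below illustrates one plausible form of an attention-based weighting step over modality-specific features under missing modalities. This is a minimal illustration, not the authors' implementation: the class name `ModalityAttentionWeighting`, the pooling-and-scoring design, and all tensor shapes are assumptions introduced here. The key mechanism shown is masking absent modalities out of the softmax so they receive zero weight, which lets the fused representation adapt to whichever subset of the four MRI modalities is available.

```python
import torch
import torch.nn as nn


class ModalityAttentionWeighting(nn.Module):
    """Hypothetical sketch of attention-based modality weighting.

    Given per-modality feature maps and an availability mask, it computes
    softmax weights over the *available* modalities and returns their
    weighted combination, so missing modalities contribute nothing.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Shared scoring head: global-average-pooled features of each
        # modality are mapped to a scalar relevance score.
        self.score = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 1),
        )

    def forward(self, feats: torch.Tensor, avail: torch.Tensor) -> torch.Tensor:
        # feats: (B, M, C, D, H, W) stacked modality-specific features
        # avail: (B, M) binary mask, 1 = modality present
        b, m = feats.shape[:2]
        pooled = feats.flatten(3).mean(dim=3)           # (B, M, C)
        logits = self.score(pooled).squeeze(-1)         # (B, M)
        # Exclude missing modalities from the softmax entirely.
        logits = logits.masked_fill(avail == 0, float("-inf"))
        weights = torch.softmax(logits, dim=1)          # (B, M)
        weights = weights.view(b, m, 1, 1, 1, 1)
        return (weights * feats).sum(dim=1)             # (B, C, D, H, W)


if __name__ == "__main__":
    fuse = ModalityAttentionWeighting(channels=32)
    x = torch.randn(2, 4, 32, 8, 8, 8)                  # 4 MRI modalities
    avail = torch.tensor([[1, 1, 0, 1], [1, 0, 0, 1]])  # some modalities absent
    print(fuse(x, avail).shape)                          # torch.Size([2, 32, 8, 8, 8])
```

Masking with `-inf` before the softmax ensures the remaining weights still sum to one over the present modalities, a common way to keep a fusion module well defined across all missing-modality scenarios.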