Medical image registration aims to align images acquired at different times, from different perspectives, or across modalities. Traditional deep learning approaches typically concatenate the moving and fixed images as a joint input, limiting the model's ability to process each image's features independently and to establish accurate voxel-level correspondences. This paper introduces a novel registration framework, DELCA-Net, which decouples feature extraction from correspondence modeling using a large kernel attention (LKA) mechanism. DELCA-Net employs a dual-stream shared encoder that processes the moving and fixed images separately, capturing long-range semantic dependencies. In addition, we propose a Cross-Resolution Attention Refinement Module (CARM) that enhances multi-scale spatial correspondences via a coarse-to-fine fusion strategy, improving anatomical feature alignment across resolutions. Comprehensive experiments on the OASIS and IXI datasets demonstrate that our method consistently improves registration accuracy while offering better interpretability and computational efficiency. On the IXI dataset, our model achieves a 1.2% increase in the Dice coefficient while requiring only 1.9% of the parameters of TransMorph. The implementation of our method is publicly available at https://github.com/windandink/DELCA-NET.
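The abstract names a large kernel attention (LKA) mechanism as the core of the decoupled design. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the standard LKA decomposition (a depthwise convolution, a dilated depthwise convolution, and a pointwise convolution whose output re-weights the input), shown in 2D for brevity even though the paper registers 3D volumes. All layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Illustrative LKA block: approximates a large receptive field by
    stacking a 5x5 depthwise conv, a 7x7 depthwise conv with dilation 3,
    and a 1x1 conv; the result multiplicatively gates the input features."""

    def __init__(self, dim: int):
        super().__init__()
        # 5x5 depthwise conv captures local context
        self.conv_local = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # 7x7 depthwise conv, dilation 3 -> effective 19x19 receptive field
        self.conv_spatial = nn.Conv2d(dim, dim, 7, padding=9,
                                      groups=dim, dilation=3)
        # 1x1 conv mixes channels to form the attention map
        self.conv_channel = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.conv_channel(self.conv_spatial(self.conv_local(x)))
        return x * attn  # attention map re-weights the input

lka = LargeKernelAttention(dim=8)
feat = torch.randn(1, 8, 32, 32)      # batch, channels, H, W
out = lka(feat)                       # same shape as the input
```

In a dual-stream encoder such as the one described, a block like this would be applied to each stream's feature maps with shared weights, so long-range dependencies are modeled before correspondences are computed.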
