Automatic retinal vessel segmentation is vital for clinical assessment and therapeutic intervention. Extracting both global and local features from fundus images remains a significant challenge for current methods. To address this, we propose a lightweight channel split attention network (CSA-Net) that integrates channel split attention and residual feature fusion to effectively capture global context and fine-grained vascular details. In our model, we first design a channel split attention (CSA) module to facilitate multiscale feature aggregation and the acquisition of global information. We then introduce a residual feature fusion (RFF) module that reduces information loss during multiscale fusion by incorporating residuals and enhancing the feature maps. In addition, we adopt a lightweight design that uses adaptive inverted residual encoders with varied kernel sizes to improve computational efficiency. Five publicly available fundus datasets (DRIVE, CHASEDB1, STARE, HRF, LES-AV) were used to evaluate our model. Experimental results demonstrate that CSA-Net achieves state-of-the-art performance, with ACC values up to 0.9830 and AUC values of 0.9948, using only 2.39 M parameters. Ablation studies validate the effectiveness of the individual modules. The proposed CSA-Net thus strikes a good balance between segmentation accuracy and model complexity, achieving competitive or better performance with fewer parameters on multiple retinal vessel segmentation benchmarks.
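To make the channel-split-attention idea concrete, the following is a minimal, hypothetical PyTorch sketch. It assumes a design in which the input feature map is split along the channel dimension, each split is processed by a depthwise convolution with a different kernel size (multiscale aggregation), and a squeeze-and-excitation style channel gate re-weights the concatenated result; the class name `ChannelSplitAttention` and all hyperparameters are illustrative assumptions, and the paper's actual CSA module may differ.

```python
# Hypothetical sketch of a channel-split attention block (not the paper's exact CSA module).
import torch
import torch.nn as nn


class ChannelSplitAttention(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7), reduction: int = 4):
        super().__init__()
        assert channels % len(kernel_sizes) == 0, "channels must split evenly"
        self.split = channels // len(kernel_sizes)
        # One depthwise convolution per channel split, each with a different receptive field.
        self.branches = nn.ModuleList(
            nn.Conv2d(self.split, self.split, k, padding=k // 2, groups=self.split)
            for k in kernel_sizes
        )
        # Lightweight channel gate applied to the re-assembled multiscale features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split channels, run each chunk through its own multiscale branch, then concatenate.
        chunks = torch.split(x, self.split, dim=1)
        y = torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)
        # Channel attention re-weights the fused features; the residual preserves fine detail.
        return x + y * self.gate(y)


if __name__ == "__main__":
    block = ChannelSplitAttention(channels=48)
    out = block(torch.randn(1, 48, 64, 64))
    print(out.shape)  # torch.Size([1, 48, 64, 64])
```

The residual connection in the sketch mirrors the abstract's emphasis on reducing information loss during multiscale fusion, while the per-split depthwise convolutions keep the parameter count low, in line with the lightweight design goal.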