Colorectal cancer is one of the most prevalent and lethal forms of cancer. Automated detection, segmentation, and classification of early polyp tissue in endoscopy images of the colorectum have shown strong potential for improving clinical diagnostic accuracy, avoiding missed detections, and reducing the incidence of colorectal cancer in the population. However, most existing studies overlook both the potential of information fusion across different deep neural network layers and optimization of model complexity, which limits their clinical utility. To address these limitations, the concept of integrity learning is introduced, which divides polyp segmentation into two progressively completed stages, and a cross-level fusion lightweight network, IC-FusionNet, is proposed to accurately segment polyps in endoscopy images. In the first stage, the Context Fusion Module (CFM) aggregates information from neighboring encoder branches and the current level to achieve macro-integrity learning. In the second stage, polyp detail information from shallower layers is aggregated with high-dimensional semantic information from deeper layers, so that complementary information across layers is mutually enhanced. IC-FusionNet is evaluated on five polyp segmentation benchmark datasets using eight evaluation metrics. It achieves 0.908 and 0.925 on the Kvasir and CVC-ClinicDB datasets, respectively, along with 0.851 and 0.973. On three external polyp segmentation test datasets, the model obtains averages of 0.788 and 0.712. Compared with existing methods, IC-FusionNet achieves superior or near-optimal performance on most evaluation metrics. Moreover, IC-FusionNet contains only 3.84M parameters and 0.76G MACs, a reduction of 9.22% in parameter count and 74.15% in computational complexity compared with recent lightweight segmentation networks.
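The cross-level aggregation described above (upsampling deep semantic features to the resolution of shallow detail features, then mixing the concatenated channels) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature shapes, the `upsample2x`/`fuse` helpers, and the 1x1-convolution weights `w` are all hypothetical assumptions.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor upsampling of a (C, H, W) feature map to (C, 2H, 2W).
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse(shallow, deep, w):
    # Cross-level fusion sketch: bring deep semantics up to the shallow
    # resolution, concatenate along channels, then mix with a 1x1 conv
    # (implemented here as a matrix product over the channel axis).
    deep_up = upsample2x(deep)
    cat = np.concatenate([shallow, deep_up], axis=0)  # (C1 + C2, H, W)
    c, h, width = cat.shape
    fused = w @ cat.reshape(c, h * width)             # (C_out, H*W)
    return fused.reshape(-1, h, width)

rng = np.random.default_rng(0)
shallow = rng.standard_normal((16, 32, 32))   # fine detail, high resolution
deep = rng.standard_normal((64, 16, 16))      # semantics, low resolution
w = rng.standard_normal((16, 16 + 64)) * 0.05 # hypothetical 1x1 conv weights
out = fuse(shallow, deep, w)
print(out.shape)  # (16, 32, 32)
```

In practice such a fusion block would use learned convolutions, normalization, and nonlinearities; the sketch only shows the shape bookkeeping that lets detail and semantic features complement each other at a common resolution.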