{"title":"基于双分支网络的医学图像自动分割方法","authors":"Lei Yang, H. Huang, Suli Bai, Yanhong Liu","doi":"10.1109/ACAIT56212.2022.10137944","DOIUrl":null,"url":null,"abstract":"Medical image segmentation is a basal and essential task for computer-aided diagnosis and quantification of diseases. However, robust and precise medical image segmentation is still a challenging task on account of much factors, such as complex backgrounds, overlapping structures, high variation of appearances and low contrast. Recently, with the strong support of deep convolutional neural networks (DCNNs), the encoder-decoder based segmentation networks have been the popular detection schemes for medical image analysis, yet image segmentation based on DCNNs still faces some limitations, such as restricted receptive field, limited information flow, etc. To address such challenges, a novel dual-branch deep residual U-Net network is proposed in this paper for medical image detection which provides more avenues for information flow to gather both high-level and low-level feature maps and a greater depth of contextual data.A residual U-Net network is constructed for efficient feature expression using residual learning, attention block, and feature expression. Meanwhile, fused with atrous spatial pyramid pooling (ASPP) block and squeeze-and-excitation (SE) block, The residual U-Net network is suggested to embed an attention fusion block to gather multi-scale contextual data. On the basis, To fully utilize local contextual data and increase segmentation precision, a dual-branch deep residual U-Net network is built by stacking two residual U-Net networks. Combined with multiple public benchmark data sets on medical images, including the CVC-ClinicDB, the GIAS set and LUNA16 set, experimental results indicate the superior ability of proposed segmentation network on medical image segmentation compared with other advanced segmentation models.","PeriodicalId":398228,"journal":{"name":"2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Automatic Medical Image Segmentation Approach via Dual-Branch Network\",\"authors\":\"Lei Yang, H. Huang, Suli Bai, Yanhong Liu\",\"doi\":\"10.1109/ACAIT56212.2022.10137944\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Medical image segmentation is a basal and essential task for computer-aided diagnosis and quantification of diseases. However, robust and precise medical image segmentation is still a challenging task on account of much factors, such as complex backgrounds, overlapping structures, high variation of appearances and low contrast. Recently, with the strong support of deep convolutional neural networks (DCNNs), the encoder-decoder based segmentation networks have been the popular detection schemes for medical image analysis, yet image segmentation based on DCNNs still faces some limitations, such as restricted receptive field, limited information flow, etc. 
To address such challenges, a novel dual-branch deep residual U-Net network is proposed in this paper for medical image detection which provides more avenues for information flow to gather both high-level and low-level feature maps and a greater depth of contextual data.A residual U-Net network is constructed for efficient feature expression using residual learning, attention block, and feature expression. Meanwhile, fused with atrous spatial pyramid pooling (ASPP) block and squeeze-and-excitation (SE) block, The residual U-Net network is suggested to embed an attention fusion block to gather multi-scale contextual data. On the basis, To fully utilize local contextual data and increase segmentation precision, a dual-branch deep residual U-Net network is built by stacking two residual U-Net networks. Combined with multiple public benchmark data sets on medical images, including the CVC-ClinicDB, the GIAS set and LUNA16 set, experimental results indicate the superior ability of proposed segmentation network on medical image segmentation compared with other advanced segmentation models.\",\"PeriodicalId\":398228,\"journal\":{\"name\":\"2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ACAIT56212.2022.10137944\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACAIT56212.2022.10137944","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Automatic Medical Image Segmentation Approach via Dual-Branch Network
Medical image segmentation is a basic and essential task for computer-aided diagnosis and quantification of diseases. However, robust and precise medical image segmentation remains challenging owing to many factors, such as complex backgrounds, overlapping structures, high appearance variation, and low contrast. Recently, with the strong support of deep convolutional neural networks (DCNNs), encoder-decoder segmentation networks have become popular for medical image analysis, yet DCNN-based segmentation still faces limitations such as a restricted receptive field and limited information flow. To address these challenges, this paper proposes a novel dual-branch deep residual U-Net for medical image segmentation, which provides more avenues for information flow, gathering both high-level and low-level feature maps together with deeper contextual information. A residual U-Net is first constructed for efficient feature expression by introducing residual learning and attention blocks. Meanwhile, an attention fusion block, built from an atrous spatial pyramid pooling (ASPP) block and a squeeze-and-excitation (SE) block, is embedded into the residual U-Net to gather multi-scale contextual information. On this basis, to fully utilize local contextual information and increase segmentation precision, a dual-branch deep residual U-Net is built by stacking two residual U-Nets. Experiments on multiple public medical image benchmarks, including the CVC-ClinicDB, GIAS, and LUNA16 datasets, show that the proposed segmentation network outperforms other advanced segmentation models.
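The abstract names two standard building blocks, atrous spatial pyramid pooling (ASPP) and squeeze-and-excitation (SE). The sketch below shows these two blocks in their commonly used PyTorch forms; the channel counts, dilation rates, reduction ratio, and the way the paper actually fuses them into its attention fusion block are not specified in the abstract, so those details are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of SE and ASPP blocks as commonly implemented; hyperparameters
# (reduction ratio, dilation rates) are conventional defaults, not from the paper.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweights channels using global spatial context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(             # excitation: per-channel gates in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # recalibrate feature maps channel-wise


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convs capture multi-scale context."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # fuse branch outputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```

According to the abstract, an attention fusion block combining these two components is embedded into each residual U-Net branch to gather multi-scale contextual information; the exact wiring of that fusion, and of the two stacked U-Net branches, is given in the full paper.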