Title: U-Net based on Feature Fusion for Rectal Cancer Image Segmentation (Chinese title: 基于U-Net特征融合的直肠癌图像分割, "Rectal Cancer Image Segmentation Based on U-Net Feature Fusion")
Authors: Wan Yuqian, Ma Jianwei, Zang Shaofei
Venue: 2021 11th International Conference on Information Technology in Medicine and Education (ITME), pp. 302-306, November 2021
DOI: 10.1109/ITME53901.2021.00069
U-Net based on Feature Fusion for Rectal Cancer Image Segmentation
To address the low segmentation precision and strong background-noise interference in the segmentation of rectal cancer lesions, we propose an improved U-Net that performs feature fusion between the U-Net network and a weighted feature pyramid structure (W-FPN). First, weights are assigned according to the proportion each pixel value contributes to the final pixel, strengthening the feature fusion and exploiting the scale information in the fusion to improve segmentation. Second, three serial depthwise separable dilated convolution layers with dilation rates of 1, 2, and 4 are added after the third output layer of the network to enlarge the receptive field of the feature maps and make fuller use of image feature information. Finally, the improved model is compared with the U-Net, SegNet, and DeepLab segmentation models. Experimental results show that our approach achieves good and stable performance, with a precision of 83.38% and a Dice similarity coefficient of 92.56%.
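The abstract does not give implementation details, but two pieces of its arithmetic can be checked directly: the receptive-field growth of serial 3×3 dilated convolutions with rates 1, 2, and 4, and the definitions of the two reported metrics (precision and the Dice similarity coefficient). The sketch below is illustrative, not the authors' code; function names and the toy masks are hypothetical.

```python
def receptive_field(dilations, kernel=3):
    """Effective receptive field of serially stacked dilated convolutions.

    A k x k conv with dilation d spans k + (k - 1) * (d - 1) pixels,
    so each layer adds (k - 1) * d to the running receptive field.
    """
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

def dice_coefficient(pred, target):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) over flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, target))
    return 2 * inter / (sum(pred) + sum(target))

def precision(pred, target):
    """Precision: TP / (TP + FP) over flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, target))
    return tp / sum(pred)

# Three serial 3x3 convs with dilation rates 1, 2, 4 (as in the paper)
# grow a single pixel's receptive field to 15 x 15.
rf = receptive_field([1, 2, 4])  # → 15

# Toy 1-D masks purely to exercise the metric definitions.
pred = [1, 1, 0, 1]
target = [1, 0, 0, 1]
d = dice_coefficient(pred, target)  # 2*2 / (3 + 2) → 0.8
```

This makes concrete why the dilation rates matter: three plain 3×3 convolutions (rates 1, 1, 1) would only reach a 7×7 receptive field, whereas the 1-2-4 schedule reaches 15×15 with the same parameter count.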