An Improved DeepLab Model for Clothing Image Segmentation

Jue Wang, Xianfu Wan, Liqing Li, Jun Wang

2021 IEEE 4th International Conference on Electronics and Communication Engineering (ICECE), published 2021-12-17

DOI: 10.1109/ICECE54449.2021.9674326
Citations: 2
Abstract
Image segmentation is an effective way to extract the clothing region from an image, and it is especially suitable for analyzing and processing clothing images with complex backgrounds. Current image segmentation research focuses mainly on deep learning, and convolutional-neural-network methods such as the DeepLab series are widely used. However, their segmentation results are not good enough when clothing images contain complex deformations and edges. To improve the performance of clothing image segmentation, this paper develops an improved DeepLab model. Starting from the DeepLabV3+ model, the new model redesigns the receptive field module and the decoder. In the receptive field module, the ASPP (Atrous Spatial Pyramid Pooling) is replaced with an improved RFB (Receptive Field Block), which better simulates human visual perception. In the decoder, interpolation upsampling is replaced with transposed convolution because of its adaptability to deformed edges and corners in images, and the concatenations between high-level and low-level features are increased from two stages to five stages in order to obtain more low-level features. After training and testing on the DeepFashion2 dataset, the improved model achieved 97.26% accuracy, 93.23% mIoU, 90.56% AP75 and 44.80% AP95, significantly better than DeepLabV3+. The improved DeepLab model takes 93.806 ms to complete inference on one image, only slightly slower than DeepLabV3+ (92.095 ms). The improved DeepLab model has a stronger ability to capture information such as clothing edges, which improves segmentation performance.
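The decoder change described above, replacing fixed interpolation with a learnable transposed convolution, can be illustrated with a minimal single-channel sketch. This is not the paper's implementation; the kernel, stride, and shapes below are illustrative assumptions, showing only how each input pixel scatters a kernel-weighted patch onto a stride-spaced output grid, so that with trained weights the upsampling can adapt to edges and corners rather than blending them as bilinear interpolation does.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Minimal 2D transposed convolution (single channel, no padding).

    Each input pixel scatters its value, weighted by the kernel, onto a
    stride-spaced output grid. With learnable kernel weights this is a
    trainable alternative to fixed bilinear upsampling in a decoder.
    """
    H, W = x.shape
    kH, kW = kernel.shape
    # Output size for a transposed convolution without padding:
    # stride * (input - 1) + kernel
    out = np.zeros((stride * (H - 1) + kH, stride * (W - 1) + kW))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kH,
                j * stride:j * stride + kW] += x[i, j] * kernel
    return out

feat = np.arange(4.0).reshape(2, 2)   # toy 2x2 low-resolution feature map
kernel = np.ones((2, 2))              # illustrative kernel (learned in practice)
up = transposed_conv2d(feat, kernel)  # upsampled to 4x4
print(up.shape)  # (4, 4)
```

With stride 2 and a 2x2 kernel the scattered patches do not overlap, so each input value simply fills a 2x2 block; larger kernels overlap and sum, which is where a trained kernel can sharpen boundaries.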