An Improved DeepLab Model for Clothing Image Segmentation
Jue Wang, Xianfu Wan, Liqing Li, Jun Wang
2021 IEEE 4th International Conference on Electronics and Communication Engineering (ICECE)
Published: 2021-12-17 · DOI: 10.1109/ICECE54449.2021.9674326
Citations: 2
Abstract
Image segmentation is an effective way to extract the clothing region from an image, and it is especially suitable for analyzing and processing clothing images with complex backgrounds. Current research on image segmentation focuses mainly on deep learning, and convolutional-neural-network methods such as the DeepLab family are widely used. However, their segmentation results degrade when clothing images contain complex deformations and edges. To improve clothing image segmentation, this paper develops an improved DeepLab model. Starting from DeepLabV3+, the new model redesigns the receptive field module and the decoder. In the receptive field module, ASPP (Atrous Spatial Pyramid Pooling) is replaced with an improved RFB (Receptive Field Block), which performs much better at simulating human visual perception. In the decoder, interpolation upsampling is replaced with transposed convolution because of its adaptability to deformed edges and corners in the images, and the concatenations between high-level and low-level features are increased from two stages to five in order to retain more low-level detail. After training and testing on the DeepFashion2 dataset, the improved model achieved 97.26% accuracy, 93.23% mIoU, 90.56% AP75, and 44.80% AP95, significantly better than DeepLabV3+. Inference on one image takes 93.806 ms, only slightly slower than DeepLabV3+ (92.09 ms). The improved DeepLab model captures information such as clothing edges more effectively, which improves segmentation performance.
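The two architectural ideas in the abstract can be illustrated with a minimal sketch (hypothetical illustration, not the authors' code): atrous (dilated) convolution, the building block of ASPP and RFB, enlarges the effective kernel without adding parameters, and a stride-2 transposed convolution — unlike fixed bilinear interpolation — is a learnable upsampler whose kernel can adapt to edges and corners.

```python
import numpy as np

def effective_kernel(k: int, d: int) -> int:
    """Effective receptive-field size of a k x k conv with dilation rate d."""
    return k + (k - 1) * (d - 1)

def conv2d_valid(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Plain single-channel 'valid' cross-correlation."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def transpose_conv2d(x: np.ndarray, k: np.ndarray, stride: int = 2) -> np.ndarray:
    """Stride-s transposed convolution via zero-insertion + full convolution.

    Output size is (H - 1) * stride + kh, matching a transposed-conv layer
    with no padding. Unlike bilinear interpolation, the kernel k is a
    learnable parameter, so the upsampler can adapt to edges and corners.
    """
    H, W = x.shape
    kh, kw = k.shape
    # Insert stride-1 zeros between input pixels, then pad by kernel-1.
    up = np.zeros((H * stride - (stride - 1), W * stride - (stride - 1)))
    up[::stride, ::stride] = x
    padded = np.pad(up, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d_valid(padded, np.flip(k))  # flip kernel -> true convolution

# A 3x3 kernel with dilation rate 6 (a typical ASPP/RFB rate) covers 13x13:
print(effective_kernel(3, 6))  # 13

# Upsampling a 4x4 feature map with a 3x3 kernel at stride 2 yields 9x9:
y = transpose_conv2d(np.ones((4, 4)), np.ones((3, 3)))
print(y.shape)  # (9, 9)
```

In a real decoder the transposed-convolution kernels are trained jointly with the rest of the network, which is what gives them the deformation adaptability the paper attributes to this replacement.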