{"title":"Semantic Segmentation Model Based on Four Channel Non-Separable Additive Wavelet Combined with DeepLabv3+","authors":"斌 刘","doi":"10.12677/jisp.2023.123028","DOIUrl":null,"url":null,"abstract":"In order to improve the loss of details in the traditional semantic segmentation model, which leads to the decline of information, we propose an improved DeepLabv3+ network segmentation model. Firstly, replace the backbone network with the MobileNetV2 network. Secondly, the source image is decomposed by constructing a four-channel non-separable wavelet low-pass filter, and the high-frequency subimage of the source image is extracted. Thirdly, the common convolution is replaced by deep separable convolution and the adaptive refinement feature of convolutional attention module (CBAM) is introduced to improve the segmentation effect of the network model. The experimental results show that on the VOC data set, the mean intersection over union (MIoU) of the improved model is 0.94% higher than that of the original DeepLabv3+ model, the mean pixel accuracy (MPA) is 1.34% higher than the original DeepLabv3+ model, and the accuracy is 0.19% higher than the original DeepLabv3+ model. On the BDD100K data set, mean intersection over union is 0.53% higher than the original DeepLabv3+ model. The DeepLabv3+ mean pixel accuracy is 0.15% higher than the original DeepLabv3+ model, and the accuracy is 0.13% higher than the original DeepLabv3+ model. Both subjective and objective results show that our model is better than the original model.","PeriodicalId":69487,"journal":{"name":"图像与信号处理","volume":"3 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"图像与信号处理","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.12677/jisp.2023.123028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
To address the loss of detail in traditional semantic segmentation models, which degrades the information retained in the segmentation result, we propose an improved DeepLabv3+ segmentation network. First, the backbone is replaced with MobileNetV2. Second, the source image is decomposed with a four-channel non-separable wavelet low-pass filter, and the high-frequency sub-images of the source image are extracted. Third, standard convolutions are replaced with depthwise separable convolutions, and the adaptive feature refinement of the Convolutional Block Attention Module (CBAM) is introduced to improve the segmentation performance of the network. Experimental results show that on the VOC dataset, the improved model's mean intersection over union (MIoU) is 0.94% higher than that of the original DeepLabv3+ model, its mean pixel accuracy (MPA) is 1.34% higher, and its accuracy is 0.19% higher. On the BDD100K dataset, MIoU is 0.53% higher, MPA is 0.15% higher, and accuracy is 0.13% higher than the original DeepLabv3+ model. Both subjective and objective results show that the improved model outperforms the original.
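
To make two of the named components concrete, the following is a minimal PyTorch sketch (not the authors' code) of a depthwise separable convolution and a CBAM-style attention block of the kind the abstract describes. Class names, channel sizes, and the reduction ratio are illustrative assumptions, not values taken from the paper; the four-channel non-separable wavelet filter is specific to the paper and is not reproduced here.

# Sketch of a depthwise separable convolution and a CBAM block (illustrative only).
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP applied to average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention weights.
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention weights.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


# Example: refine a feature map such as one produced by a MobileNetV2 backbone
# (the 320-channel, 32x32 shape here is a hypothetical placeholder).
feat = torch.randn(1, 320, 32, 32)
feat = DepthwiseSeparableConv(320, 256)(feat)
feat = CBAM(256)(feat)
print(feat.shape)  # torch.Size([1, 256, 32, 32])

In this sketch the depthwise separable convolution replaces a standard convolution to reduce parameters, and CBAM reweights the resulting features first per channel and then per spatial location, which is the "adaptive feature refinement" role the abstract assigns to it.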