{"title":"Res-Attention Net: An Image Dehazing Network","authors":"Shuai Song, Ren-Yuan Zhang, Zhipeng Qiu, Jiawei Jin, Shangbin Yu","doi":"10.1109/CSAIEE54046.2021.9543298","journal":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","publicationDate":"2021-08-20","publicationTypes":"Journal Article","citationCount":"3","platform":"Semanticscholar"}
In the image dehazing task, three key subtasks must be performed. The first is extracting the finer-scale features covered by haze, e.g. the detail textures of objects. The second is retaining the coarser-scale features, e.g. the contours of objects, as completely as possible. The third is fusing the finer-scale and coarser-scale features together. Targeting these three points, we propose a single-image dehazing network named Res-Attention Net, based on an encoding-decoding structure similar to U-Net. The encoder and decoder of Res-Attention Net are designed to extract the detail textures and retrieve the contours at the same time. We construct the encoder of Res-Attention Net from residual blocks (RBs) of different depths together with downsampling, which performs the first two subtasks, i.e. extracting multiscale image features from the original hazy image. The decoder of Res-Attention Net is based on attention gates (AGs) and upsampling. The decoder retrieves the coarser-scale features from the output of the encoder and fuses them with the multiscale features passed over from the encoder; that is, the decoder performs the last two subtasks. Experimental results show that the proposed Res-Attention Net performs better than several state-of-the-art methods.
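The abstract does not give the exact layer definitions, but the described pipeline (residual blocks in the encoder, attention gates re-weighting encoder features before fusion in the decoder) can be illustrated with a minimal NumPy sketch. This is a toy 1-D stand-in, not the authors' implementation: it assumes the common additive attention-gate formulation from Attention U-Net, and all weight names (`W1`, `W2`, `Wx`, `Wg`, `psi`) are hypothetical placeholders for learned convolution weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def residual_block(x, W1, W2):
    # Toy residual block: two linear maps plus a skip connection,
    # standing in for the convolutional RBs of the encoder.
    return relu(x + W2 @ relu(W1 @ x))

def attention_gate(x_skip, g, W_x, W_g, psi):
    # Additive attention gate (assumed Attention-U-Net-style form):
    # the decoder gating signal g produces a scalar coefficient in (0, 1)
    # that re-weights the encoder skip features x_skip.
    q = relu(W_x @ x_skip + W_g @ g)
    alpha = sigmoid(psi @ q)
    return alpha * x_skip

C = 8                                   # toy channel dimension
x = rng.standard_normal(C)              # "hazy image" features
g = rng.standard_normal(C)              # decoder gating signal
W1, W2, Wx, Wg = (rng.standard_normal((C, C)) * 0.1 for _ in range(4))
psi = rng.standard_normal(C) * 0.1

enc = residual_block(x, W1, W2)             # encoder: extract features (subtasks 1-2)
gated = attention_gate(enc, g, Wx, Wg, psi) # decoder: gate the skip features
fused = g + gated                           # decoder: fuse coarse and fine features (subtask 3)
```

Because the gate multiplies the skip features by a coefficient in (0, 1), the fused output can emphasize or suppress each skip connection, which is how the decoder selectively retrieves coarser-scale structure while still admitting the encoder's fine detail.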