Improved SRGAN model
Cong Zhu, Fei Wang, Sheng Liang, Keke Liu
International Conference on Image Processing and Intelligent Control, vol. 12782, published 2023-08-09. DOI: 10.1117/12.3000809
Abstract: Image super-resolution reconstruction is an ill-posed problem: a single low-resolution image can correspond to many high-resolution images. Models such as SRCNN and SRDenseNet are trained with the mean squared error (MSE) loss, which yields blurry results that average over multiple plausible high-quality images. A GAN-based model, by contrast, can reconstruct images closer to the true distribution of high-quality images. In this paper, we modify the SRGAN model by using an L1 norm loss for the discriminator, which makes training more stable. We also compute the perceptual loss from VGG16 features instead of VGG19, which produces better results. The content loss is a weighted sum of the VGG loss and the MSE loss, striking a better balance between PSNR and human perception.
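The loss design described above can be sketched in a few lines: an L1 loss (used here for the discriminator), a pixel-space MSE loss (which drives PSNR), and a content loss that weights the pixel MSE against a feature-space MSE computed on VGG16 activations. This is a minimal NumPy sketch; the feature extraction is stubbed out as plain arrays, and the weights `w_mse` and `w_vgg` are assumptions, as the abstract does not state the exact values.

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error; the paper uses an L1 norm loss for the
    # discriminator to stabilize training.
    return float(np.mean(np.abs(pred - target)))

def mse_loss(pred, target):
    # Mean squared error in pixel space; minimizing this maximizes PSNR
    # but tends to produce blurry, averaged-looking images.
    return float(np.mean((pred - target) ** 2))

def content_loss(sr_pixels, hr_pixels, sr_feats, hr_feats,
                 w_mse=1.0, w_vgg=0.006):
    # Weighted sum of pixel-space MSE and feature-space (VGG16) MSE.
    # sr_feats / hr_feats stand in for VGG16 feature maps of the
    # super-resolved and ground-truth images; the weights here are
    # illustrative assumptions, not values from the paper.
    return (w_mse * mse_loss(sr_pixels, hr_pixels)
            + w_vgg * mse_loss(sr_feats, hr_feats))
```

In a real training loop, `sr_feats` and `hr_feats` would come from a fixed, pretrained VGG16 network evaluated on the generator output and the ground-truth image, respectively.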