CRNet: Unsupervised Color Retention Network for Blind Motion Deblurring

Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Haijun Zhang, Meng Wang, Shuicheng Yan

Proceedings of the 30th ACM International Conference on Multimedia, October 10, 2022. DOI: 10.1145/3503161.3547962
Citations: 12
Abstract
Blind image deblurring remains a challenging problem due to its inherently ill-posed nature. Many supervised methods have been proposed to improve deblurring performance, but obtaining labeled samples from a specific distribution (or domain) is usually expensive, and data-driven models trained this way cannot generalize to blurry images from all domains. These challenges have motivated a number of unsupervised deblurring methods; however, they introduce significant chromatic aberration between the restored latent image and the original image, which directly degrades performance. In this paper, we therefore propose CRNet, a novel unsupervised color retention network for blind motion deblurring. We also introduce the new concepts of blur offset estimation and adaptive blur correction to retain color information during deblurring. Unlike previous studies, CRNet does not learn a direct mapping from the blurry image to the restored latent image, but rather from the blurry image to a motion offset. An adaptive blur correction operation is then applied to the blurry image to restore the latent image, thereby retaining the color information of the original image to the greatest extent. To further retain color information and extract blur information effectively, we also propose a new module called pyramid global blur feature perception (PGBFP). To quantitatively demonstrate the effectiveness of our network in color retention, we propose a novel chromatic aberration quantization metric consistent with human perception. Extensive quantitative and visual experiments show that CRNet achieves state-of-the-art performance on unsupervised deblurring tasks.
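The offset-then-correct idea can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the function name, the per-pixel (dy, dx) offset representation, and the nearest-neighbour resampling are all simplifying assumptions. The point it demonstrates is why such a correction retains color: every output pixel is copied from the blurry input itself rather than synthesized, so the original color statistics carry over.

```python
import numpy as np

def adaptive_correction(blurry, offsets):
    """Resample the blurry image at positions shifted by a predicted
    motion-offset field (nearest-neighbour for brevity; hypothetical
    stand-in for CRNet's learned adaptive blur correction).

    blurry  : (H, W, 3) float array
    offsets : (H, W, 2) float array of per-pixel (dy, dx) offsets
    """
    h, w = blurry.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift each sampling position by its offset, then clamp to the image.
    sy = np.clip(np.round(ys + offsets[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + offsets[..., 1]).astype(int), 0, w - 1)
    # Every output pixel is a pixel of the input image, so no new colors
    # are introduced -- the restored image inherits the input's palette.
    return blurry[sy, sx]
```

With an all-zero offset field the operation is the identity, which matches the intuition that a sharp input needs no correction.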