{"title":"Diffusion Model-Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment","authors":"Zhaoyang Wang;Bo Hu;Mingyang Zhang;Jie Li;Leida Li;Maoguo Gong;Xinbo Gao","doi":"10.1109/TIP.2024.3523800","DOIUrl":null,"url":null,"abstract":"Existing free-energy guided No-Reference Image Quality Assessment (NR-IQA) methods continue to face challenges in effectively restoring complexly distorted images. The features guiding the main network for quality assessment lack interpretability, and efficiently leveraging high-level feature information remains a significant challenge. As a novel class of state-of-the-art (SOTA) generative model, the diffusion model exhibits the capability to model intricate relationships, enhancing image restoration effectiveness. Moreover, the intermediate variables in the denoising iteration process exhibit clearer and more interpretable meanings for high-level visual information guidance. In view of these, we pioneer the exploration of the diffusion model into the domain of NR-IQA. We design a novel diffusion model for enhancing images with various types of distortions, resulting in higher quality and more interpretable high-level visual information. Our experiments demonstrate that the diffusion model establishes a clear mapping relationship between image reconstruction and image quality scores, which the network learns to guide quality assessment. Finally, to fully leverage high-level visual information, we design two complementary visual branches to collaboratively perform quality evaluation. Extensive experiments are conducted on seven public NR-IQA datasets, and the results demonstrate that the proposed model outperforms SOTA methods for NR-IQA. The codes will be available at <uri>https://github.com/handsomewzy/DiffV2IQA</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"263-278"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10829512/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Existing free-energy-guided No-Reference Image Quality Assessment (NR-IQA) methods still struggle to effectively restore images with complex distortions. The features that guide the main quality-assessment network lack interpretability, and efficiently leveraging high-level feature information remains a significant challenge. As a class of state-of-the-art (SOTA) generative models, diffusion models can capture intricate relationships, which improves image restoration. Moreover, the intermediate variables produced during the iterative denoising process carry clearer and more interpretable high-level visual information for guidance. Motivated by these observations, we pioneer the exploration of the diffusion model in the domain of NR-IQA. We design a novel diffusion model for enhancing images with various types of distortions, yielding higher-quality restorations and more interpretable high-level visual information. Our experiments demonstrate that the diffusion model establishes a clear mapping between image reconstruction and image quality scores, which the network learns and uses to guide quality assessment. Finally, to fully leverage high-level visual information, we design two complementary visual branches that collaboratively perform quality evaluation. Extensive experiments are conducted on seven public NR-IQA datasets, and the results demonstrate that the proposed model outperforms SOTA methods for NR-IQA. The code will be available at https://github.com/handsomewzy/DiffV2IQA.
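
To make the high-level idea concrete, below is a minimal PyTorch sketch of the two-branch design described in the abstract. It is not the authors' actual DiffV2IQA implementation: the `DiffusionRestorer`, both branch modules, and the fusion head are hypothetical stand-ins. The sketch only illustrates the flow the abstract describes, in which a restoration model plays the role of the diffusion model, one branch analyses the restored (compensated) image, a second branch analyses the visual difference between the distorted and restored images, and a small regression head fuses both features into a quality score.

```python
# Illustrative sketch only; all module names and layer sizes are assumptions,
# not the paper's architecture.
import torch
import torch.nn as nn


class DiffusionRestorer(nn.Module):
    """Placeholder for a pretrained diffusion restoration model.

    In the real method this would run (part of) the reverse denoising process
    and could expose intermediate denoising states as high-level guidance.
    Here it is a simple convolutional stand-in so the sketch runs end to end.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, distorted: torch.Tensor) -> torch.Tensor:
        return self.net(distorted)


def conv_branch() -> nn.Module:
    # Tiny feature extractor used by both branches in this sketch.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class TwoBranchIQA(nn.Module):
    """Two complementary visual branches fused into a single quality score."""

    def __init__(self):
        super().__init__()
        self.restorer = DiffusionRestorer()
        self.compensation_branch = conv_branch()  # analyses the restored image
        self.difference_branch = conv_branch()    # analyses distorted - restored
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, distorted: torch.Tensor) -> torch.Tensor:
        restored = self.restorer(distorted)
        comp_feat = self.compensation_branch(restored)
        diff_feat = self.difference_branch(distorted - restored)
        return self.head(torch.cat([comp_feat, diff_feat], dim=1)).squeeze(1)


if __name__ == "__main__":
    model = TwoBranchIQA()
    scores = model(torch.randn(2, 3, 224, 224))  # one predicted score per image
    print(scores.shape)  # torch.Size([2])
```

In practice the branches would use stronger backbones and the restorer would be a trained diffusion model; the point here is only the data flow from restoration to complementary branches to a fused quality score.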