Existing no-reference image quality assessment (NR-IQA) methods have not incorporated image semantics explicitly in the assessment process, thus overlooking the significant correlation between image content and its quality. To address this gap, we leverage image semantics as guiding information for quality assessment, integrating it explicitly into the NR-IQA process through a Semantic-Guided NR-IQA model (SGIQA) built on the Swin Transformer. Specifically, we introduce a Semantic Attention Module and a Perceptual Rule Learning Module. The Semantic Attention Module refines the features extracted by the deep network according to the image content, enabling the network to dynamically extract quality-aware features conditioned on the semantic context of the image. The Perceptual Rule Learning Module generates parameters for the image quality regression module tailored to the image content, facilitating a dynamic assessment of image quality based on its semantic information. By integrating these two modules with the Swin Transformer, we obtain the final semantic-guided NR-IQA model. Extensive experiments on five widely used IQA datasets demonstrate that our method not only exhibits excellent generalization capabilities but also achieves state-of-the-art performance.
{"title":"SGIQA: Semantic-Guided No-Reference Image Quality Assessment","authors":"Linpeng Pan;Xiaozhe Zhang;Fengying Xie;Haopeng Zhang;Yushan Zheng","doi":"10.1109/TBC.2024.3450320","DOIUrl":"10.1109/TBC.2024.3450320","url":null,"abstract":"Existing no reference image quality assessment(NR-IQA) methods have not incorporated image semantics explicitly in the assessment process, thus overlooking the significant correlation between image content and its quality. To address this gap, we leverages image semantics as guiding information for quality assessment, integrating it explicitly into the NR-IQA process through a Semantic-Guided NR-IQA model(SGIQA), which is based on the Swin Transformer. Specifically, we introduce a Semantic Attention Module and a Perceptual Rule Learning Module. The Semantic Attention Module refines the features extracted by the deep network according to the image content, allowing the network to dynamically extract quality perceptual features according to the semantic context of the image. The Perceptual Rule Learning Module generates parameters for the image quality regression module tailored to the image content, facilitating a dynamic assessment of image quality based on its semantic information. Employing the Swin Transformer and integrating these two modules, we have developed the final semantic-guided NR-IQA model. Extensive experiments on five widely-used IQA datasets demonstrate that our method not only exhibits excellent generalization capabilities but also achieves state-of-the-art performance.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 4","pages":"1292-1301"},"PeriodicalIF":3.2,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning-based point cloud compression has achieved great success in Rate-Distortion (RD) efficiency. Existing methods usually employ a Variational AutoEncoder (VAE) network, which can lead to poor detail reconstruction and high computational complexity. To address these issues, we propose a Scale-adaptive Asymmetric Sparse Variational AutoEncoder (SAS-VAE) in this work. First, we develop an Asymmetric Multiscale Sparse Convolution (AMSC), which exploits multi-resolution branches to aggregate multiscale features at the encoder and excludes symmetric feature fusion branches at the decoder to control model complexity. Second, we design a Scale Adaptive Feature Refinement Structure (SAFRS) to adaptively adjust the number of Feature Refinement Modules (FRMs), thereby improving RD performance with an acceptable computational overhead. Third, we implement our framework with AMSC and SAFRS, and train it with an RD loss based on a Fine-grained Weighted Binary Cross-Entropy (FWBCE) function. Experimental results on the 8iVFB, Owlii, and MVUB datasets show that our method outperforms several popular methods, with a 90.0% time reduction and a 51.8% BD-BR saving compared with V-PCC. The code will be available soon at https://github.com/fancj2017/SAS-VAE.
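The abstract does not give the exact form of the FWBCE loss, so the snippet below sketches a plain weighted binary cross-entropy over predicted voxel occupancy as a stand-in. The function name weighted_bce_loss and the single scalar pos_weight are illustrative assumptions, not the authors' fine-grained weighting scheme.

```python
import torch
import torch.nn.functional as F

def weighted_bce_loss(logits: torch.Tensor,
                      occupancy: torch.Tensor,
                      pos_weight: float = 3.0) -> torch.Tensor:
    """Weighted BCE over voxel occupancy predictions (illustrative sketch).

    logits:     (N,) raw occupancy predictions for candidate voxels.
    occupancy:  (N,) ground-truth occupancy labels in {0, 1}.
    pos_weight: extra weight on occupied voxels; a fine-grained scheme would
                assign per-voxel weights instead of a single scalar.
    """
    weights = torch.where(occupancy > 0.5,
                          torch.full_like(occupancy, pos_weight),
                          torch.ones_like(occupancy))
    return F.binary_cross_entropy_with_logits(logits, occupancy, weight=weights)
```

In sparse-convolution codecs of this kind, such an occupancy loss is typically combined with a rate term estimated from the entropy model to form the overall RD objective.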