Blind Image Quality Assessment With Coarse-Grained Perception Construction and Fine-Grained Interaction Learning

IF 3.2 · CAS Tier 1 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic) · IEEE Transactions on Broadcasting · Pub Date: 2023-12-28 · DOI: 10.1109/TBC.2023.3342696
Bo Hu;Tuoxun Zhao;Jia Zheng;Yan Zhang;Leida Li;Weisheng Li;Xinbo Gao
Journal: IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 533-544
URL: https://ieeexplore.ieee.org/document/10375566/
Citations: 0

Abstract

Image Quality Assessment (IQA) plays an important role in computer vision. However, most existing metrics for Blind IQA (BIQA) adopt an end-to-end approach and do not adequately simulate the process of human subjective evaluation, which limits further improvements in model performance. During perception, people first form a preliminary impression of an image's distortion type and relative quality, and then assign a specific quality score under the influence of the interaction between the two. Although some methods have attempted to explore the effects of distortion type and relative quality, the relationship between them has been neglected. In this paper, we propose a BIQA method with coarse-grained perception construction and fine-grained interaction learning, called PINet for short. The fundamental idea is to learn from the two-stage human perceptual process. Specifically, in the pre-training stage, the backbone first processes a pair of synthetic distorted images with pseudo-subjective scores, and the multi-scale feature extraction module integrates the deep information and delivers it to the coarse-grained perception construction module, which performs distortion discrimination and quality ranking. In the fine-tuning stage, we propose a fine-grained interaction learning module that makes the two pieces of information interact, further improving the performance of PINet. The experimental results show that the proposed PINet not only achieves competitive performance on synthetic distortion datasets but also performs better on authentic distortion datasets.
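The abstract does not specify the training objectives, but the coarse-grained perception stage it describes (distortion discrimination plus quality ranking on image pairs) is conventionally trained with a classification loss and a pairwise margin-ranking loss. The sketch below illustrates that common formulation only; all function names and the weighting parameter `alpha` are hypothetical, not taken from the paper.

```python
import numpy as np

def ranking_hinge_loss(score_a, score_b, label, margin=1.0):
    """Pairwise margin-ranking loss for the quality-ranking head.

    label = +1 if image A should rank higher (better quality) than B,
    label = -1 otherwise. The loss is zero once the predicted scores
    are separated by at least `margin` in the labelled direction.
    """
    return max(0.0, -label * (score_a - score_b) + margin)

def distortion_ce_loss(logits, target):
    """Cross-entropy for the distortion-discrimination head
    (softmax over distortion types such as blur, JPEG, noise)."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())  # shift for numerical stability
    probs /= probs.sum()
    return -np.log(probs[target])

def pretrain_loss(logits, target, score_a, score_b, rank_label, alpha=1.0):
    """Hypothetical joint pre-training objective: distortion
    classification plus alpha-weighted pairwise ranking."""
    return distortion_ce_loss(logits, target) + alpha * ranking_hinge_loss(
        score_a, score_b, rank_label)
```

With correctly ordered scores separated by the margin (e.g. `score_a=2.0`, `score_b=0.0`, `label=+1`) the ranking term vanishes and only the classification term remains, which is what lets the pseudo-subjective scores supervise relative rather than absolute quality during pre-training.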
Source journal
IEEE Transactions on Broadcasting (Engineering & Technology – Telecommunications)
CiteScore: 9.40
Self-citation rate: 31.10%
Articles per year: 79
Review time: 6-12 weeks
Journal description: The Society’s Field of Interest is “Devices, equipment, techniques and systems related to broadcast technology, including the production, distribution, transmission, and propagation aspects.” In addition to this formal FOI statement, which is used to provide guidance to the Publications Committee in the selection of content, the AdCom has further resolved that “broadcast systems includes all aspects of transmission, propagation, and reception.”