Saliency-Guided No-Reference Omnidirectional Image Quality Assessment via Scene Content Perceiving

IF 5.6 · CAS Tier 2 (Engineering & Technology) · JCR Q1 (Engineering, Electrical & Electronic) · IEEE Transactions on Instrumentation and Measurement · Pub Date: 2024-10-23 · DOI: 10.1109/TIM.2024.3485447
Youzhi Zhang;Lifei Wan;Deyang Liu;Xiaofei Zhou;Ping An;Caifeng Shan
Citations: 0

Abstract

Due to the widespread application of the virtual reality (VR) technique, the omnidirectional image (OI) has attracted remarkable attention from both academia and industry. In contrast to a natural 2-D image, an OI contains $360^{\circ } \times 180^{\circ }$ panoramic content, which presents great challenges for no-reference quality assessment. In this article, we propose a saliency-guided no-reference OI quality assessment (OIQA) method based on scene content understanding. Inspired by the fact that humans use hierarchical representations to grade images, we extract multiscale features from each projected viewport. Then, we integrate texture removal and background detection techniques to obtain the corresponding saliency map of each viewport, which is subsequently utilized to guide the multiscale feature fusion from the low-level features to the high-level ones. Furthermore, motivated by the human way of understanding content, we leverage a self-attention-based Transformer to build nonlocal mutual dependencies to perceive the variations of distortion and scene in each viewport. Moreover, we also propose a content perception hypernetwork to adaptively return weights and biases for the quality regressor, which is conducive to understanding the scene content and learning the perception rule for the quality assessment procedure. Comprehensive experiments validate that the proposed method achieves competitive performance on two publicly available databases. The code is publicly available at https://github.com/ldyorchid/SCP-OIQA.
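The two mechanisms the abstract names, saliency-guided multiscale fusion and a hypernetwork that emits the quality regressor's parameters, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their code is at the repository above); the shapes, the strided saliency resizing, and the `HyperRegressor` MLP are illustrative assumptions.

```python
import numpy as np

def saliency_guided_fusion(viewport_features, saliency_maps):
    """Pool each viewport's multiscale feature maps with saliency weights.

    viewport_features: per viewport, a list of (H, W, C) arrays at
    different scales; saliency_maps: one (H0, W0) map per viewport.
    Returns one fused descriptor per viewport.
    """
    fused = []
    for feat_scales, sal in zip(viewport_features, saliency_maps):
        pooled = []
        for feat in feat_scales:
            h, w = feat.shape[:2]
            # Crude downsampling of the saliency map to the feature
            # resolution by striding (a stand-in for interpolation).
            sal_r = sal[::sal.shape[0] // h, ::sal.shape[1] // w][:h, :w]
            weights = sal_r / (sal_r.sum() + 1e-8)
            # Saliency-weighted global pooling over spatial positions.
            pooled.append((feat * weights[..., None]).sum(axis=(0, 1)))
        fused.append(np.concatenate(pooled))
    return np.stack(fused)

class HyperRegressor:
    """Hypernetwork sketch: a small MLP maps a content descriptor to the
    weights and bias of a linear quality regressor, which is then applied
    to the fused quality features."""

    def __init__(self, content_dim, feat_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (content_dim, hidden))
        self.b1 = np.zeros(hidden)
        # The hypernetwork's output holds feat_dim regressor weights
        # plus one bias term for the generated linear regressor.
        self.w2 = rng.normal(0.0, 0.1, (hidden, feat_dim + 1))
        self.b2 = np.zeros(feat_dim + 1)

    def __call__(self, content_vec, quality_feat):
        h = np.maximum(content_vec @ self.w1 + self.b1, 0.0)  # ReLU
        params = h @ self.w2 + self.b2
        w, b = params[:-1], params[-1]
        return float(quality_feat @ w + b)  # predicted quality score
```

The point of the hypernetwork design is that the regressor's parameters are not fixed after training: they are re-generated per image from the content descriptor, so different scene contents are scored by different linear rules over the same fused features.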
Source journal: IEEE Transactions on Instrumentation and Measurement (Engineering: Electrical & Electronic)
CiteScore: 9.00
Self-citation rate: 23.20%
Articles per year: 1294
Review time: 3.9 months
Journal description: Papers are sought that address innovative solutions to the development and use of electrical and electronic instruments and equipment to measure, monitor and/or record physical phenomena for the purpose of advancing measurement science, methods, functionality and applications. The scope of these papers may encompass: (1) theory, methodology, and practice of measurement; (2) design, development and evaluation of instrumentation and measurement systems and components used in generating, acquiring, conditioning and processing signals; (3) analysis, representation, display, and preservation of the information obtained from a set of measurements; and (4) scientific and technical support to establishment and maintenance of technical standards in the field of Instrumentation and Measurement.