No-reference video quality assessment based on human visual perception

IF 1.0 · SCI Zone 4 (Computer Science) · JCR Q4 (Engineering, Electrical & Electronic) · Journal of Electronic Imaging · Pub Date: 2024-07-01 · DOI: 10.1117/1.jei.33.4.043029
Zhou Zhou, Guangqian Kong, Xun Duan, Huiyun Long
Abstract

Conducting video quality assessment (VQA) for user-generated content (UGC) videos and achieving consistency with subjective quality assessment are highly challenging tasks. We propose a no-reference video quality assessment (NR-VQA) method for UGC scenarios by considering characteristics of human visual perception. To distinguish between varying levels of human attention within different regions of a single frame, we devise a dual-branch network. This network extracts spatial features containing positional information of moving objects from frame-level images. In addition, we employ the temporal pyramid pooling module to effectively integrate temporal features of different scales, enabling the extraction of inter-frame temporal information. To mitigate the time-lag effect in the human visual system, we introduce the temporal pyramid attention module. This module evaluates the significance of individual video frames and simulates the varying attention levels exhibited by humans towards frames. We conducted experiments on the KoNViD-1k, LIVE-VQC, CVD2014, and YouTube-UGC databases. The experimental results demonstrate the superior performance of our proposed method compared to recent NR-VQA techniques in terms of both objective assessment and consistency with subjective assessment.
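The temporal pyramid pooling idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes average pooling of per-frame feature vectors over segments at scales of 1, 2, and 4, and the function and parameter names are hypothetical.

```python
# Hedged sketch of temporal pyramid pooling: pool per-frame feature
# vectors over progressively finer temporal segments and concatenate
# the pooled results into one fixed-length video descriptor.
# Assumptions (not from the paper): average pooling, scales (1, 2, 4).

def _avg(vectors):
    """Element-wise mean of a non-empty list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def temporal_pyramid_pool(frame_feats, scales=(1, 2, 4)):
    """Concatenate segment-wise averages of frame features.

    frame_feats: list of T per-frame feature vectors (equal length).
    scales: number of temporal segments at each pyramid level.
    Returns a list of length feat_dim * sum(scales).
    """
    T = len(frame_feats)
    pooled = []
    for s in scales:
        for k in range(s):
            start = k * T // s
            end = max((k + 1) * T // s, start + 1)  # keep segments non-empty
            pooled.extend(_avg(frame_feats[start:end]))
    return pooled
```

Because every pyramid level divides the same frame sequence, the output length depends only on the feature dimension and the chosen scales, which is what makes the descriptor usable for videos of varying length.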
Citations: 0

Source journal

Journal of Electronic Imaging (Engineering & Technology – Imaging Science & Photographic Technology)
CiteScore: 1.70
Self-citation rate: 27.30%
Articles published per year: 341
Review time: 4.0 months
Journal description: The Journal of Electronic Imaging publishes peer-reviewed papers in all technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, and applications of electronic imaging systems.