Spatio-temporal enhancement method based on dense connection structure for compressed video

IF 1.0 | CAS Tier 4 (Computer Science) | JCR Q4 (Engineering, Electrical & Electronic) | Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043054
Hongyao Li, Xiaohai He, Xiaodong Bi, Shuhua Xiong, Honggang Chen
{"title":"Spatio-temporal enhancement method based on dense connection structure for compressed video","authors":"Hongyao Li, Xiaohai He, Xiaodong Bi, Shuhua Xiong, Honggang Chen","doi":"10.1117/1.jei.33.4.043054","DOIUrl":null,"url":null,"abstract":"Under limited bandwidth conditions, video transmission often employs lossy compression to reduce the data volume, inevitably introducing compression noise. Quality enhancement of compressed videos can effectively recover the information loss incurred during the compression process. Currently, multi-frame quality enhancement of compressed videos has shown performance advantages compared to single-frame methods, as it utilizes the temporal correlation of videos. Methods based on deformable convolutions obtain spatio-temporal fusion features for reconstruction through multi-frame alignment. However, due to the limited utilization of deep information and sensitivity to alignment accuracy, these methods yield suboptimal results, especially in scenarios with scene changes and intense motion. To overcome these limitations, we propose a dense network-based quality enhancement method to obtain more accurate spatio-temporal fusion features. Specifically, the deep spatial features are first extracted from the to-be-enhanced frames using dense connections, then combined with the aligned features obtained from deformable convolution through the convolution and attention mechanism to make the network more attentive to useful branches in an adaptive way, and finally, the enhanced frames are obtained through the quality enhancement module of the dense connection structure. The experimental results show that when the quantization parameter is 37, the proposed method can improve the average peak signal-to-noise ratio by 0.99 dB in the lowdelay_P configuration.","PeriodicalId":54843,"journal":{"name":"Journal of Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Electronic Imaging","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1117/1.jei.33.4.043054","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Cited by: 0

Abstract

Under limited bandwidth conditions, video transmission often employs lossy compression to reduce the data volume, which inevitably introduces compression noise. Quality enhancement of compressed videos can effectively recover information lost during compression. Multi-frame quality enhancement methods currently outperform single-frame methods because they exploit the temporal correlation of videos. Methods based on deformable convolution obtain spatio-temporal fusion features for reconstruction through multi-frame alignment. However, because they make limited use of deep information and are sensitive to alignment accuracy, these methods yield suboptimal results, especially under scene changes and intense motion. To overcome these limitations, we propose a dense-network-based quality enhancement method that obtains more accurate spatio-temporal fusion features. Specifically, deep spatial features are first extracted from the frame to be enhanced using dense connections. These are then combined, through convolution and an attention mechanism, with the aligned features produced by deformable convolution, so that the network adaptively attends to the more useful branch. Finally, the enhanced frame is produced by a quality enhancement module with a dense connection structure. Experimental results show that, at a quantization parameter of 37 under the lowdelay_P configuration, the proposed method improves the average peak signal-to-noise ratio by 0.99 dB.
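The architecture described above has three main ingredients: a densely connected block that extracts deep spatial features from the frame being enhanced, deformable-convolution alignment of a neighboring frame, and an attention gate that adaptively fuses the two branches. The PyTorch sketch below is a minimal illustration of these pieces, not the authors' implementation: the class names (DenseBlock, AlignedBranch, AttentionFusion), channel widths, and layer counts are all assumptions invented for clarity; only torchvision's DeformConv2d is a real library component.

```python
# Minimal sketch of the pipeline in the abstract (illustrative, not the paper's code).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DenseBlock(nn.Module):
    """Densely connected conv layers: each layer sees all earlier outputs,
    which preserves the deep information that plain cascades discard."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True)))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)  # compress back to `channels`

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


class AlignedBranch(nn.Module):
    """Aligns a neighboring frame's features to the target frame with a
    deformable convolution whose offsets are predicted from both frames."""
    def __init__(self, channels=64, k=3):
        super().__init__()
        self.offset = nn.Conv2d(2 * channels, 2 * k * k, 3, padding=1)
        self.dconv = DeformConv2d(channels, channels, k, padding=1)

    def forward(self, target, neighbor):
        off = self.offset(torch.cat([target, neighbor], dim=1))
        return self.dconv(neighbor, off)


class AttentionFusion(nn.Module):
    """Channel attention that reweights the spatial and aligned branches
    before merging, so the network adaptively favors the more useful one."""
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, spatial, aligned):
        x = torch.cat([spatial, aligned], dim=1)
        return self.merge(x * self.gate(x))


def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB, the metric the results report."""
    return 10 * torch.log10(peak ** 2 / torch.mean((x - y) ** 2))


if __name__ == "__main__":
    feat = nn.Conv2d(3, 64, 3, padding=1)        # shallow feature extractor
    target = feat(torch.rand(1, 3, 64, 64))      # frame to be enhanced
    neighbor = feat(torch.rand(1, 3, 64, 64))    # an adjacent frame
    spatial = DenseBlock()(target)
    aligned = AlignedBranch()(target, neighbor)
    fused = AttentionFusion()(spatial, aligned)
    print(fused.shape)                           # torch.Size([1, 64, 64, 64])
```

In this reading, the sigmoid gate is what realizes the "attentive to useful branches" behavior: it reweights channels from each branch before the 1x1 merge, and psnr() on frames in [0, 1] gives the dB figure in which the reported 0.99 dB average gain at QP 37 is measured.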
Source journal: Journal of Electronic Imaging (Engineering & Technology: Imaging Science & Photographic Technology)
CiteScore: 1.70
Self-citation rate: 27.30%
Articles published: 341
Review time: 4.0 months
Journal description: The Journal of Electronic Imaging publishes peer-reviewed papers in all technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, and applications of electronic imaging systems.
Latest articles in this journal:
DTSIDNet: a discrete wavelet and transformer based network for single image denoising
Multi-head attention with reinforcement learning for supervised video summarization
End-to-end multitasking network for smart container product positioning and segmentation
Generative object separation in X-ray images
Toward effective local dimming-driven liquid crystal displays: a deep curve estimation–based adaptive compensation solution