Deep Virtual Reference Frame Generation For Multiview Video Coding

Jianjun Lei, Zongqian Zhang, Dong Liu, Ying Chen, N. Ling
DOI: 10.1109/ICIP40778.2020.9191112
Published in: 2020 IEEE International Conference on Image Processing (ICIP), October 2020
Citations: 3

Abstract

Multiview video contains a large amount of data, which poses great challenges to both storage and transmission. Thus, it is essential to increase the compression efficiency of multiview video coding. In this paper, a deep virtual reference frame generation method is proposed to improve the performance of multiview video coding. Specifically, a parallax-guided generation network (PGG-Net) is designed to transform the parallax relation between different viewpoints and generate a high-quality virtual reference frame. In the network, a multilevel receptive field module is designed to enlarge the receptive field and extract multi-scale deep features. A parallax attention fusion module is then used to transform the parallax and merge the features. The proposed method is integrated into the 3D-HEVC platform, and the generated virtual reference frame is inserted into the reference picture list as an additional reference. Experimental results show that the proposed method achieves a 5.31% average BD-rate reduction compared to 3D-HEVC.
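The abstract does not specify the internals of the parallax attention fusion module. As a rough illustration only, a generic row-wise cross-view (parallax) attention for rectified views — where each position in the current view attends along its epipolar line, i.e. the same row, in the reference view — could be sketched as follows. The function name, shapes, and the dot-product/softmax formulation are assumptions, not the paper's actual module:

```python
import numpy as np

def parallax_attention_fuse(feat_cur, feat_ref):
    """Fuse reference-view features into the current view (hypothetical sketch).

    For rectified views, corresponding points lie on the same image row, so
    each position in the current view attends over all horizontal positions
    of the same row in the reference view, then takes the attention-weighted
    sum of reference features.  Shapes: (C, H, W) -> (C, H, W).
    """
    C, H, W = feat_cur.shape
    fused = np.empty_like(feat_cur, dtype=float)
    for y in range(H):
        q = np.asarray(feat_cur[:, y, :], dtype=float)   # (C, W) queries, current view
        k = np.asarray(feat_ref[:, y, :], dtype=float)   # (C, W) keys/values, reference view
        scores = q.T @ k / np.sqrt(C)                    # (W, W) row-wise parallax affinity
        scores -= scores.max(axis=1, keepdims=True)      # numerically stable softmax
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)          # each row sums to 1
        fused[:, y, :] = k @ attn.T                      # weighted sum of reference features
    return fused
```

Because each attention row sums to one, a constant reference feature map passes through unchanged, which is a quick sanity check for an implementation like this.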
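The 5.31% figure is a Bjøntegaard delta rate (BD-rate): the average bitrate change at equal quality, computed from four rate–PSNR points per codec by fitting a cubic to log-bitrate as a function of PSNR and averaging the difference over the overlapping quality range. A minimal pure-Python sketch of this standard computation (not the paper's code) is:

```python
import math

def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def _fit_log_rate(points):
    """Fit a cubic log10(rate) = p(psnr) through four (bitrate, psnr) points."""
    A = [[1.0, q, q * q, q ** 3] for _, q in points]
    b = [math.log10(r) for r, _ in points]
    return _solve(A, b)

def _integral(coef, lo, hi):
    """Definite integral of the fitted cubic on [lo, hi]."""
    anti = lambda x: sum(c * x ** (i + 1) / (i + 1) for i, c in enumerate(coef))
    return anti(hi) - anti(lo)

def bd_rate(anchor, test):
    """Average bitrate change (%) of `test` vs `anchor` at equal quality.

    Each argument is four (bitrate, psnr) pairs; negative means a saving.
    """
    lo = max(min(q for _, q in anchor), min(q for _, q in test))
    hi = min(max(q for _, q in anchor), max(q for _, q in test))
    avg_diff = (_integral(_fit_log_rate(test), lo, hi)
                - _integral(_fit_log_rate(anchor), lo, hi)) / (hi - lo)
    return (10 ** avg_diff - 1) * 100
```

As a sanity check, a hypothetical codec that spends exactly 10% less bitrate than the anchor at every PSNR point yields a BD-rate of -10%, since the fitted log-rate curves differ by the constant log10(0.9).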