Light field image coding using a residual channel attention network–based view synthesis

Data Technologies and Applications · Impact Factor: 1.7 · JCR Q3 (Computer Science, Information Systems) · CAS Quartile 4 (Computer Science) · Published: 2024-02-21 · DOI: 10.1108/dta-03-2023-0071
Faguo Liu, Qian Zhang, Tao Yan, Bin Wang, Ying Gao, Jiaqi Hou, Feiniu Yuan
{"title":"Light field image coding using a residual channel attention network–based view synthesis","authors":"Faguo Liu, Qian Zhang, Tao Yan, Bin Wang, Ying Gao, Jiaqi Hou, Feiniu Yuan","doi":"10.1108/dta-03-2023-0071","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>Light field images (LFIs) have gained popularity as a technology to increase the field of view (FoV) of plenoptic cameras since they can capture information about light rays with a large FoV. Wide FoV causes light field (LF) data to increase rapidly, which restricts the use of LF imaging in image processing, visual analysis and user interface. Effective LFI coding methods become of paramount importance. This paper aims to eliminate more redundancy by exploring sparsity and correlation in the angular domain of LFIs, as well as mitigate the loss of perceptual quality of LFIs caused by encoding.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>This work proposes a new efficient LF coding framework. On the coding side, a new sampling scheme and a hierarchical prediction structure are used to eliminate redundancy in the LFI's angular and spatial domains. At the decoding side, high-quality dense LF is reconstructed using a view synthesis method based on the residual channel attention network (RCAN).</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>In three different LF datasets, our proposed coding framework not only reduces the transmitted bit rate but also maintains a higher view quality than the current more advanced methods.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>(1) A new sampling scheme is designed to synthesize high-quality LFIs while better ensuring LF angular domain sparsity. (2) To further eliminate redundancy in the spatial domain, new ranking schemes and hierarchical prediction structures are designed. (3) A synthetic network based on RCAN and a novel loss function is designed to mitigate the perceptual quality loss due to the coding process.</p><!--/ Abstract__block -->","PeriodicalId":56156,"journal":{"name":"Data Technologies and Applications","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Technologies and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/dta-03-2023-0071","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Purpose

Light field images (LFIs) have gained popularity as a way to enlarge the field of view (FoV) of plenoptic cameras, since they capture light-ray information over a large FoV. A wide FoV, however, causes light field (LF) data to grow rapidly, which restricts the use of LF imaging in image processing, visual analysis and user interaction. Effective LFI coding methods therefore become of paramount importance. This paper aims to eliminate more redundancy by exploiting sparsity and correlation in the angular domain of LFIs, and to mitigate the loss of perceptual quality of LFIs caused by encoding.
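To make the angular-domain sparsity mentioned above concrete, the sketch below represents an LFI as a 4D array of sub-aperture views and keeps only a sparse subset of them for encoding, leaving the remaining views to be synthesized at the decoder. The 9 x 9 angular resolution and the corners-plus-edge-midpoints-plus-centre key-view pattern are illustrative assumptions, not the sampling scheme proposed in the paper.

```python
import numpy as np

# A light field image is commonly stored as a 4D (plus colour) array L(u, v, s, t):
# (u, v) index the angular domain (sub-aperture views), (s, t) the spatial domain.
U, V, S, T, C = 9, 9, 432, 624, 3               # assumed 9x9 views of S x T pixels
lf = np.zeros((U, V, S, T, C), dtype=np.uint8)  # placeholder light field

# Hypothetical sparse sampling mask: encode only a few "key" views
# (corners, edge midpoints and the centre) and drop the rest; the dropped
# views are later synthesized from the key views at the decoder.
mask = np.zeros((U, V), dtype=bool)
for u in (0, U // 2, U - 1):
    for v in (0, V // 2, V - 1):
        mask[u, v] = True

key_views = lf[mask]                            # shape: (num_key_views, S, T, C)
print(f"encode {mask.sum()} of {U * V} views "
      f"({100 * mask.sum() / (U * V):.1f}% of the angular samples)")
```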

Design/methodology/approach

This work proposes a new, efficient LF coding framework. On the coding side, a new sampling scheme and a hierarchical prediction structure are used to eliminate redundancy in the LFI's angular and spatial domains. On the decoding side, a high-quality dense LF is reconstructed using a view synthesis method based on the residual channel attention network (RCAN).
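The abstract does not describe the authors' synthesis network in detail, so the following is only a minimal sketch of the standard residual channel attention block (RCAB) that RCAN (Zhang et al., 2018) is built from: two convolutions followed by a squeeze-and-excitation style channel attention branch and a local residual connection. The channel width and reduction ratio are assumed values.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention used in RCAN."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global spatial pooling
            nn.Conv2d(channels, channels // reduction, 1),  # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # channel excitation
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)                             # re-weight channels


class RCAB(nn.Module):
    """Residual channel attention block: conv-ReLU-conv + channel attention."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)                             # local residual connection


# Usage: features extracted from the decoded sparse views pass through stacked RCABs.
feats = torch.randn(1, 64, 64, 64)
print(RCAB(64)(feats).shape)    # torch.Size([1, 64, 64, 64])
```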

Findings

On three different LF datasets, the proposed coding framework not only reduces the transmitted bit rate but also maintains higher view quality than current state-of-the-art methods.

Originality/value

(1) A new sampling scheme is designed to synthesize high-quality LFIs while better preserving sparsity in the LF angular domain. (2) To further eliminate redundancy in the spatial domain, new ranking schemes and hierarchical prediction structures are designed. (3) A synthesis network based on RCAN, together with a novel loss function, is designed to mitigate the perceptual quality loss caused by the coding process.
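The form of the novel loss function is not given in the abstract. Purely as a hypothetical illustration of how a pixel-wise term can be combined with a structural/perceptual term to preserve perceived quality, the sketch below mixes an L1 loss with a simplified SSIM term; the uniform-window SSIM and the 0.84 weighting are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window: int = 11, c1: float = 0.01 ** 2, c2: float = 0.03 ** 2):
    """Simplified SSIM with a uniform window (inputs assumed in [0, 1])."""
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    var_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def synthesis_loss(pred, target, alpha: float = 0.84):
    """Hypothetical combined loss: pixel-wise L1 plus a structural (SSIM) term."""
    return (1 - alpha) * F.l1_loss(pred, target) + alpha * (1 - ssim(pred, target))

# Usage on a synthesized view and its ground-truth view.
pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(synthesis_loss(pred, target).item())
```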

Source journal: Data Technologies and Applications (Social Sciences - Library and Information Sciences)
CiteScore: 3.80
Self-citation rate: 6.20%
Annual publications: 29
About the journal: Previously published as Program. Online from: 2018. Subject areas: Information & Knowledge Management, Library Studies.