Predicting 360° Video Saliency: A ConvLSTM Encoder-Decoder Network With Spatio-Temporal Consistency

IF 3.7 · CAS Region 2 (Engineering & Technology) · JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) · IEEE Journal on Emerging and Selected Topics in Circuits and Systems · Pub Date: 2024-03-18 · DOI: 10.1109/JETCAS.2024.3377096
Zhaolin Wan;Han Qin;Ruiqin Xiong;Zhiyang Li;Xiaopeng Fan;Debin Zhao
Citations: 0

Abstract

360° videos have been widely used with the development of virtual reality technology and triggered a demand to determine the most visually attractive objects in them, aka 360° video saliency prediction (VSP). While generative models, i.e., variational autoencoders or autoregressive models have proved their effectiveness in handling spatio-temporal data, utilizing them in 360° VSP is still challenging due to the problem of severe distortion and feature alignment inconsistency. In this study, we propose a novel spatio-temporal consistency generative network for 360° VSP. A dual-stream encoder-decoder architecture is adopted to process the forward and backward frame sequences of 360° videos simultaneously. Moreover, a deep autoregressive module termed as axial-attention based spherical ConvLSTM is designed in the encoder to memorize features with global-range spatial and temporal dependencies. Finally, motivated by the bias phenomenon in human viewing behavior, a temporal-convolutional Gaussian prior module is introduced to further improve the accuracy of the saliency prediction. Extensive experiments are conducted to evaluate our model and the state-of-the-art competitors, demonstrating that our model has achieved the best performance on the databases of PVS-HM and VR-Eyetracking.
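The abstract's Gaussian prior module is motivated by the bias in human viewing behavior: viewers of 360° video fixate disproportionately near the equator of the equirectangular frame. The paper's module is learned and temporal-convolutional; the sketch below only illustrates the underlying idea with a static equator-bias map. The function name `equator_bias_prior` and the `sigma` value are illustrative choices, not taken from the paper.

```python
import numpy as np

def equator_bias_prior(height: int, width: int, sigma: float = 0.2) -> np.ndarray:
    """Static equator-bias prior for an equirectangular saliency map.

    sigma is in normalized latitude units (poles at +/-1); it is an
    illustrative choice, not a parameter from the paper.
    """
    # Latitude of each row, normalized to [-1, 1].
    lat = np.linspace(-1.0, 1.0, height)
    # Gaussian falloff in latitude, centered on the equator (lat = 0).
    row_weight = np.exp(-0.5 * (lat / sigma) ** 2)
    # Constant along longitude: every column shares the row profile.
    prior = np.tile(row_weight[:, None], (1, width))
    # Normalize so the map is a probability distribution over pixels.
    return prior / prior.sum()

prior = equator_bias_prior(64, 128)
```

In the paper this bias is not fixed but predicted over time; a static map like this is only the degenerate, frame-independent case.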
Source journal
CiteScore: 8.50
Self-citation rate: 2.20%
Articles published: 86
Journal description: The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.