ESIQA: Perceptual Quality Assessment of Vision-Pro-based Egocentric Spatial Images

Xilei Zhu;Liu Yang;Huiyu Duan;Xiongkuo Min;Guangtao Zhai;Patrick Le Callet
Journal: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 2277-2287
DOI: 10.1109/TVCG.2025.3549174
Published: 2025-03-11
URL: https://ieeexplore.ieee.org/document/10919000/
Citations: 0

Abstract

With the development of eXtended Reality (XR), photo capturing and display technologies based on head-mounted displays (HMDs) have advanced significantly and attracted considerable attention. Egocentric spatial images and videos are emerging as a compelling form of stereoscopic XR content. Assessing the Quality of Experience (QoE) of XR content is important for ensuring a high-quality viewing experience. Unlike traditional 2D images, egocentric spatial images pose challenges for perceptual quality assessment due to their special capture and processing methods and their stereoscopic characteristics. However, corresponding image quality assessment (IQA) research for egocentric spatial images is still lacking. In this paper, we establish the Egocentric Spatial Images Quality Assessment Database (ESIQAD), to the best of our knowledge the first IQA database dedicated to egocentric spatial images. Our ESIQAD includes 500 egocentric spatial images and the corresponding mean opinion scores (MOSs) under three display modes: 2D display, 3D-window display, and 3D-immersive display. Based on our ESIQAD, we propose a novel Mamba2-based multi-stage feature fusion model, termed ESIQAnet, which predicts the perceptual quality of egocentric spatial images under the three display modes. Specifically, we first extract features from multiple visual state space duality (VSSD) blocks, then apply cross attention to fuse binocular view information and transposed attention to further refine the features. The multi-stage features are finally concatenated and fed into a quality regression network to predict the quality score. Extensive experimental results demonstrate that ESIQAnet outperforms 22 state-of-the-art IQA models on the ESIQAD under all three display modes. The database and code are available at https://github.com/IntMeGroup/ESIQA.
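The pipeline the abstract describes — per-stage binocular features fused via cross attention, multi-stage features concatenated, then regressed to a quality score — can be illustrated with a minimal NumPy sketch. This is not the actual ESIQAnet (which uses VSSD blocks, transposed attention, and a learned regression network); the stage count, feature dimensions, and linear regression head below are purely illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    # Queries come from one view, keys/values from the other,
    # so each token in one eye's feature map attends to the other eye's tokens.
    d = q_feat.shape[-1]
    attn = softmax(q_feat @ kv_feat.T / np.sqrt(d))
    return attn @ kv_feat

rng = np.random.default_rng(0)
stages = []
for _ in range(4):  # hypothetical number of backbone stages
    # Stand-in features for left/right views: 16 tokens x 32 dims per stage
    left = rng.normal(size=(16, 32))
    right = rng.normal(size=(16, 32))
    fused = cross_attention(left, right)  # fuse binocular information
    stages.append(fused.mean(axis=0))     # pool tokens into one vector per stage

features = np.concatenate(stages)         # multi-stage concatenation -> (128,)
w = rng.normal(size=features.shape[0])    # stand-in for the regression network
score = float(features @ w)               # predicted quality score (scalar)
```

In the sketch, pooling and a linear head stand in for ESIQAnet's quality regression network; the key structural idea retained from the abstract is that binocular fusion happens per stage before the stages are concatenated.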