Audio-Visual Aware Foveated Rendering
Xuehuai Shi, Yucheng Li, Jiaheng Li, Jian Wu, Jieming Yin, Xiaobai Chen, Lili Wang
IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 10, pp. 7711–7726, 2025. DOI: 10.1109/TVCG.2025.3554737
Abstract
With the increasing complexity of geometry and rendering effects in virtual reality (VR) scenes, existing foveated rendering methods for VR head-mounted displays (HMDs) struggle to meet users' demands for high-frame-rate VR rendering ($\geq 60$ fps for rendering binocular foveated images in VR scenes containing over 50 million triangles). Current research validates that auditory content affects the perception of the human visual system (HVS). However, existing foveated rendering methods primarily model the HVS's eccentricity-dependent visual perception of the visual content in VR while ignoring the impact of auditory content on the HVS's visual perception. In this article, we introduce an auditory-content-based perceived rendering quality analysis to quantify the impact of different auditory conditions on visual perception in foveated rendering. Based on the analysis results, we propose an audio-visual aware foveated rendering method (AvFR). AvFR first constructs an audio-visual feature-driven perception model that predicts eccentricity-based visual perception in real time from the scene's audio-visual content, and then applies a foveated rendering cost optimization algorithm that adaptively controls the shading rate of different regions under the guidance of the perception model. In complex scenes with visual and auditory content containing over 1.17 million triangles, AvFR renders high-quality binocular foveated images at an average frame rate of 116 fps. The results of the main user study and performance evaluation validate that AvFR achieves a significant performance improvement (up to 1.4× speedup) over the state-of-the-art VR-HMD foveated rendering method without lowering perceived visual quality.
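The abstract describes a two-stage control loop: a perception model predicts how much quality degradation is tolerable at each retinal eccentricity given the current audio-visual content, and a cost optimizer then picks a per-region shading rate that stays under that tolerance. The Python sketch below is only a minimal, hypothetical illustration of such a loop; the function names, the logistic threshold curve, the audio-salience weighting, and the candidate shading rates are all assumptions for exposition, not the authors' published implementation.

```python
# Illustrative sketch only: AvFR's actual model and optimizer are not
# published in the abstract. All names and curves here are hypothetical
# stand-ins for the pipeline it describes: (1) predict eccentricity-based
# perception from audio-visual features, (2) assign per-region shading
# rates so estimated quality loss stays below the perceptual tolerance.
import math

# Candidate shading rates, coarsest to finest (fraction of full-rate shading).
SHADING_RATES = [0.25, 0.5, 1.0]

def perception_threshold(eccentricity_deg: float, audio_salience: float) -> float:
    """Hypothetical perception model: tolerable quality loss grows with
    eccentricity, and salient audio near a region tightens the tolerance
    (attention drawn by sound makes degradation more noticeable)."""
    base = 1.0 / (1.0 + math.exp(-(eccentricity_deg - 15.0) / 5.0))  # in (0, 1)
    return base * (1.0 - 0.5 * audio_salience)

def pick_shading_rate(eccentricity_deg: float, audio_salience: float) -> float:
    """Choose the coarsest (cheapest) rate whose estimated quality loss
    stays below the predicted perception threshold for this region."""
    threshold = perception_threshold(eccentricity_deg, audio_salience)
    for rate in SHADING_RATES:
        estimated_loss = 1.0 - rate  # toy stand-in for a measured loss curve
        if estimated_loss <= threshold:
            return rate
    return SHADING_RATES[-1]  # fall back to full-rate shading near the fovea

if __name__ == "__main__":
    # Foveal region with a loud nearby sound source: full-rate shading.
    print(pick_shading_rate(eccentricity_deg=2.0, audio_salience=0.9))   # -> 1.0
    # Far periphery in a quiet scene: coarse shading is imperceptible.
    print(pick_shading_rate(eccentricity_deg=40.0, audio_salience=0.0))  # -> 0.25
```

In a real renderer this per-region decision would feed a variable-rate shading API each frame, with the audio-salience term recomputed from the scene's active sound sources; here it simply demonstrates how an audio term can shift the eccentricity-based tolerance the abstract refers to.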