Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling

Sipeng Yang;Junhao Zhuge;Jiayu Ji;Qingchuan Zhu;Xiaogang Jin
IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 2643-2652. DOI: 10.1109/TVCG.2025.3549557. Published 2025-03-10.

Abstract

Achieving immersive virtual reality (VR) experiences typically requires extensive computational resources to ensure high-definition visuals, high frame rates, and low latency in stereoscopic rendering. This challenge is particularly pronounced for lower-tier and standalone VR devices with limited processing power. To accelerate rendering, existing supersampling and image reprojection techniques have shown significant potential, yet to date, no previous work has explored their combination to minimize stereo rendering overhead. In this paper, we introduce a lightweight supersampling framework that integrates image reprojection with spatio-temporal supersampling to accelerate stereo rendering. Our approach effectively leverages the temporal and spatial redundancies inherent in stereo videos, enabling rapid image generation for unshaded viewpoints and providing resolution-enhanced and anti-aliased images for binocular viewpoints. We first blend a rendered low-resolution (LR) frame with accumulated temporal samples to construct a high-resolution (HR) frame. This HR frame is then reprojected to the other viewpoint to directly synthesize a new image. To address disocclusions in reprojected images, we utilize accumulated history data and low-pass filtering for filling, ensuring high-quality results with minimal delay. Extensive evaluations on both PC and standalone devices confirm that our framework requires only a short runtime to generate high-fidelity images, making it an effective solution for stereo rendering across various VR platforms.
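The pipeline the abstract describes — temporal blending of an LR frame into accumulated history, horizontal reprojection of the resulting HR frame to the other eye, and hole filling from history plus low-pass filtering — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: grayscale images, per-pixel integer disparity, exponential blending with a hypothetical weight `alpha`, and a simple box blur standing in for the low-pass filter are all assumptions.

```python
import numpy as np

def temporal_blend(lr_frame, history, alpha=0.1):
    """Exponentially blend the current (upsampled) LR frame into the
    accumulated history to form a supersampled HR frame."""
    return alpha * lr_frame + (1.0 - alpha) * history

def reproject(hr_frame, disparity):
    """Shift each pixel horizontally by its disparity to synthesize the
    other eye's view; pixels no source maps to become disocclusions (NaN)."""
    h, w = hr_frame.shape
    out = np.full((h, w), np.nan)
    cols = np.arange(w)
    for y in range(h):
        tgt = cols - disparity[y].astype(int)
        valid = (tgt >= 0) & (tgt < w)
        out[y, tgt[valid]] = hr_frame[y, cols[valid]]
    return out

def fill_disocclusions(reprojected, history, kernel=3):
    """Fill holes from accumulated history, then low-pass filter (box blur)
    the filled regions to suppress artifacts."""
    filled = np.where(np.isnan(reprojected), history, reprojected)
    pad = kernel // 2
    padded = np.pad(filled, pad, mode="edge")
    blurred = np.zeros_like(filled)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + filled.shape[0], dx:dx + filled.shape[1]]
    blurred /= kernel * kernel
    # keep valid reprojected samples untouched; use blurred fill in holes
    return np.where(np.isnan(reprojected), blurred, reprojected)
```

A usage pass would blend the latest LR render into history, reproject the blended HR frame with the frame's disparity map, and fill the resulting holes, yielding the second-eye image without shading that viewpoint.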