GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering

IF 7.8 | JCR Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | ACM Transactions on Graphics | Pub Date: 2024-11-19 | DOI: 10.1145/3687923
Songyin Wu, Deepak Vembar, Anton Sochenov, Selvakumar Panneer, Sungye Kim, Anton Kaplanyan, Ling-Qi Yan
{"title":"GFFE:用于低延迟实时渲染的无 G 缓冲区帧外推法","authors":"Songyin Wu, Deepak Vembar, Anton Sochenov, Selvakumar Panneer, Sungye Kim, Anton Kaplanyan, Ling-Qi Yan","doi":"10.1145/3687923","DOIUrl":null,"url":null,"abstract":"Real-time rendering has been embracing ever-demanding effects, such as ray tracing. However, rendering such effects in high resolution and high frame rate remains challenging. Frame extrapolation methods, which do not introduce additional latency as opposed to frame interpolation methods such as DLSS 3 and FSR 3, boost the frame rate by generating future frames based on previous frames. However, it is a more challenging task because of the lack of information in the disocclusion regions and complex future motions, and recent methods also have a high engine integration cost due to requiring G-buffers as input. We propose a <jats:italic>G-buffer free</jats:italic> frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and different types of disocclusions, and design the corresponding modules of the extrapolation block to handle them. After that, a light-weight shading correction network is used to correct shading and improve overall quality. GFFE achieves comparable or better results than previous interpolation and G-buffer dependent extrapolation methods, with more efficient performance and easier integration.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"99 1","pages":""},"PeriodicalIF":7.8000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering\",\"authors\":\"Songyin Wu, Deepak Vembar, Anton Sochenov, Selvakumar Panneer, Sungye Kim, Anton Kaplanyan, Ling-Qi Yan\",\"doi\":\"10.1145/3687923\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Real-time rendering has been embracing ever-demanding effects, such as ray tracing. However, rendering such effects in high resolution and high frame rate remains challenging. Frame extrapolation methods, which do not introduce additional latency as opposed to frame interpolation methods such as DLSS 3 and FSR 3, boost the frame rate by generating future frames based on previous frames. However, it is a more challenging task because of the lack of information in the disocclusion regions and complex future motions, and recent methods also have a high engine integration cost due to requiring G-buffers as input. We propose a <jats:italic>G-buffer free</jats:italic> frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and different types of disocclusions, and design the corresponding modules of the extrapolation block to handle them. After that, a light-weight shading correction network is used to correct shading and improve overall quality. 
GFFE achieves comparable or better results than previous interpolation and G-buffer dependent extrapolation methods, with more efficient performance and easier integration.\",\"PeriodicalId\":50913,\"journal\":{\"name\":\"ACM Transactions on Graphics\",\"volume\":\"99 1\",\"pages\":\"\"},\"PeriodicalIF\":7.8000,\"publicationDate\":\"2024-11-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Graphics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3687923\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Graphics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3687923","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Real-time rendering has been embracing increasingly demanding effects, such as ray tracing. However, rendering such effects at high resolution and high frame rate remains challenging. Frame extrapolation methods boost the frame rate by generating future frames from previous ones and, unlike frame interpolation methods such as DLSS 3 and FSR 3, introduce no additional latency. Extrapolation is the harder task, however, because information is missing in disocclusion regions and future motion is complex, and recent methods also carry a high engine-integration cost because they require G-buffers as input. We propose a G-buffer free frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and the different types of disocclusion, and design corresponding modules in the extrapolation block to handle them. A lightweight shading correction network then corrects shading and improves overall quality. GFFE achieves comparable or better results than previous interpolation and G-buffer-dependent extrapolation methods, with more efficient performance and easier integration.
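To make the extrapolation idea concrete, here is a minimal, hypothetical sketch of the general approach: forward-warp the most recent rendered frame along a per-pixel motion field to predict the next frame, with a naive fallback fill for disoccluded pixels. This is not GFFE's actual pipeline (its heuristic modules and shading correction network are not reproduced here), the function name and constant-velocity assumption are illustrative only, and the sketch assumes a motion field is supplied as input; how GFFE obtains motion without G-buffers is not specified in this abstract.

```python
import numpy as np

def extrapolate_frame(prev_frame, motion):
    """Predict frame t+1 by forward-warping frame t along per-pixel
    motion (in pixels), assuming constant velocity between frames.

    prev_frame: (H, W, 3) float array, the last fully rendered frame.
    motion:     (H, W, 2) float array, motion[y, x] = (dx, dy) observed
                from frame t-1 to frame t, reused as the t -> t+1 guess.
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each source pixel under the (reused) motion field.
    xd = np.clip(np.round(xs + motion[..., 0]).astype(int), 0, w - 1)
    yd = np.clip(np.round(ys + motion[..., 1]).astype(int), 0, h - 1)
    out = np.zeros_like(prev_frame)
    covered = np.zeros((h, w), dtype=bool)
    out[yd, xd] = prev_frame[ys, xs]  # splat; on collisions, one write wins
    covered[yd, xd] = True
    # Disocclusion heuristic: pixels no source splatted into are holes.
    # Copying from the un-warped previous frame is a crude placeholder;
    # GFFE instead analyzes disocclusion types with dedicated modules
    # and corrects the result with a lightweight shading network.
    out[~covered] = prev_frame[~covered]
    return out

# Usage sketch: one extrapolated frame interleaved after each rendered one.
frame_t = np.random.rand(8, 8, 3).astype(np.float32)
motion_t = np.zeros((8, 8, 2), dtype=np.float32)  # static scene here
frame_t1 = extrapolate_frame(frame_t, motion_t)
```

Because the extrapolated frame depends only on already-rendered data, it can be displayed immediately, which is why extrapolation, unlike interpolation, adds no input-to-display latency.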
Source Journal

ACM Transactions on Graphics
Category: Engineering & Technology - Computer Science: Software Engineering
CiteScore: 14.30
Self-citation rate: 25.80%
Articles per year: 193
Review time: 12 months

About the journal: ACM Transactions on Graphics (TOG) is a peer-reviewed scientific journal that aims to disseminate the latest findings of note in the field of computer graphics. It has been published since 1982 by the Association for Computing Machinery. Starting in 2003, all papers accepted for presentation at the annual SIGGRAPH conference are printed in a special summer issue of the journal.
Latest articles in this journal:
Direct Manipulation of Procedural Implicit Surfaces
3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting
Quark: Real-time, High-resolution, and General Neural View Synthesis
Differentiable Owen Scrambling
ELMO: Enhanced Real-time LiDAR Motion Capture through Upsampling