Approximate depth of field effects using few samples per pixel

Ke Lei, J. Hughes
{"title":"Approximate depth of field effects using few samples per pixel","authors":"Ke Lei, J. Hughes","doi":"10.1145/2448196.2448215","DOIUrl":null,"url":null,"abstract":"We present a method for rendering depth of field (DoF) effects in a ray-tracing based rendering pipeline using very few samples (typically two or three) per pixel, with the ability to refocus at arbitrary depths at a given view point without gathering more samples. To do so, we treat each sample as a proxy for possible nearby samples and calculate its contributions to the final image with a splat-and-gather scheme. The radiance for each pixel in the output image is then obtained via compositing all contributing samples. We optimize the pipeline using mipmap-like techniques so that the running time is independent of the amount of focal blur in the image. Our method approximates the underlying physical image formation process and thus avoids many of the artifacts of other approximation algorithms. With very low budget it provides satisfactory DoF rendering for most purposes, and a quick preview of DoF effects for applications demanding high rendering quality.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"23 1","pages":"119-128"},"PeriodicalIF":0.0000,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2448196.2448215","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 15

Abstract

We present a method for rendering depth of field (DoF) effects in a ray-tracing-based rendering pipeline using very few samples (typically two or three) per pixel, with the ability to refocus at arbitrary depths from a given viewpoint without gathering more samples. To do so, we treat each sample as a proxy for possible nearby samples and calculate its contributions to the final image with a splat-and-gather scheme. The radiance of each pixel in the output image is then obtained by compositing all contributing samples. We optimize the pipeline using mipmap-like techniques so that the running time is independent of the amount of focal blur in the image. Our method approximates the underlying physical image-formation process and thus avoids many of the artifacts of other approximation algorithms. With a very low sample budget it provides satisfactory DoF rendering for most purposes, and a quick preview of DoF effects for applications demanding high rendering quality.
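To make the abstract's two key ingredients concrete, here is a minimal NumPy sketch of (a) a thin-lens circle of confusion (CoC) that determines how much a sample at a given depth should blur, and (b) a per-pixel mip-level lookup that keeps blur cost constant regardless of CoC size. This is our single-layer simplification, not the paper's pipeline: the names (`thin_lens_coc`, `refocus`), the default optics, and the box-filter pyramid are assumptions, and the splat-and-gather compositing over two to three samples per pixel with occlusion handling is omitted.

```python
import numpy as np


def thin_lens_coc(depth, focus_depth, aperture_mm=8.0, focal_mm=50.0,
                  px_per_mm=20.0):
    """Circle-of-confusion radius in pixels under the thin-lens model.

    All distances are in mm from the lens. Parameter names and default
    optics are illustrative choices, not values from the paper.
    """
    coc_mm = (aperture_mm * focal_mm * np.abs(depth - focus_depth)
              / (depth * (focus_depth - focal_mm)))
    return 0.5 * coc_mm * px_per_mm


def build_pyramid(img, levels):
    """Mip-like chain of box-filtered images; level k is 2**k times smaller."""
    pyr = [np.asarray(img, dtype=np.float64)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2  # crop to even
        p = p[:h, :w]
        pyr.append(0.25 * (p[0::2, 0::2] + p[1::2, 0::2]
                           + p[0::2, 1::2] + p[1::2, 1::2]))
    return pyr


def refocus(radiance, depth, focus_depth, levels=6):
    """Refocus a (grayscale) image to a new focal depth without new samples.

    Each pixel reads one texel from the pyramid level whose footprint
    matches its CoC radius, so per-pixel cost is constant no matter how
    large the blur is -- the property the abstract attributes to the
    mipmap-like optimization. Compositing across depth layers and
    occlusion handling are omitted in this sketch.
    """
    coc = thin_lens_coc(depth, focus_depth)
    # Level k averages roughly a 2**k-pixel-wide neighbourhood.
    level = np.clip(np.log2(np.maximum(coc, 1.0)), 0, levels - 1).astype(int)
    pyr = build_pyramid(radiance, levels)
    out = np.empty(radiance.shape, dtype=np.float64)
    ys, xs = np.indices(radiance.shape)
    for k in range(levels):
        mask = level == k
        yk = np.minimum(ys[mask] >> k, pyr[k].shape[0] - 1)
        xk = np.minimum(xs[mask] >> k, pyr[k].shape[1] - 1)
        out[mask] = pyr[k][yk, xk]
    return out


if __name__ == "__main__":
    # Toy scene: noisy texture over a left-to-right depth ramp (0.5 m to 5 m).
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    z = np.tile(np.linspace(500.0, 5000.0, 128), (128, 1))
    near = refocus(img, z, focus_depth=800.0)    # left side stays sharp
    far = refocus(img, z, focus_depth=4000.0)    # right side stays sharp
    print(near.std(), far.std())                 # blur lowers variance
```

Because refocusing only changes `focus_depth` inside the CoC formula, the same stored samples can be re-blurred to any focal plane, which is the "refocus without gathering more samples" property the abstract claims.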