Linear Volumetric Focus for Light Field Cameras

ACM Trans. Graph., pages 15:1-15:20. Publication date: 2015-03-02. DOI: 10.1145/2665074
D. Dansereau, O. Pizarro, Stefan B. Williams
Citations: 144

Abstract

We demonstrate that the redundant information in light field imagery allows volumetric focus, an improvement of signal quality that maintains focus over a controllable range of depths. To do this, we derive the frequency-domain region of support of the light field, finding it to be the 4D hyperfan at the intersection of a dual fan and a hypercone, and design a filter with correspondingly shaped passband. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenslet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including planar focus, fan-shaped antialiasing, and nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, through murky water and particulate matter, and in real-world scenarios, evaluated using a variety of metrics. We show that the hyperfan's performance scales with aperture count, and demonstrate the inclusion of aliased components for high-quality rendering.
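A rough reading of the construction above: a Lambertian point at a given depth occupies a 2D plane in the 4D frequency domain whose slope in the (Omega_s, Omega_u) plane matches its slope in the (Omega_t, Omega_v) plane, so a range of depths sweeps out two matching fans (the dual fan), and the matching-slope constraint Omega_s * Omega_v = Omega_t * Omega_u defines the hypercone. The NumPy sketch below builds such a passband mask and applies it to a 4D light field. It is an illustration of the idea under these assumptions, not the paper's filter design; the function and parameter names (hyperfan_mask, apply_hyperfan, fan_width, cone_width, the slope bounds, the Gaussian rolloffs) are invented for the example.

# A minimal NumPy sketch of a hyperfan passband, assuming a two-plane-
# parameterized, grayscale light field L[s, t, u, v]; parameter names and
# rolloff shapes are illustrative assumptions, not the paper's design.
import numpy as np


def hyperfan_mask(shape, slope_min=-0.5, slope_max=0.5,
                  fan_width=0.02, cone_width=0.01):
    """Smooth 4D passband: a dual fan intersected with the hypercone."""
    ws = np.fft.fftfreq(shape[0]).reshape(-1, 1, 1, 1)  # Omega_s
    wt = np.fft.fftfreq(shape[1]).reshape(1, -1, 1, 1)  # Omega_t
    wu = np.fft.fftfreq(shape[2]).reshape(1, 1, -1, 1)  # Omega_u
    wv = np.fft.fftfreq(shape[3]).reshape(1, 1, 1, -1)  # Omega_v

    def fan(wa, wb):
        # A scene plane at slope lam occupies wa*lam + wb = 0; pass the wedge
        # swept out as lam ranges over [slope_min, slope_max], with a soft edge.
        lo = wa * slope_min + wb
        hi = wa * slope_max + wb
        inside = (lo * hi) <= 0.0  # sign change: between the bounding planes
        edge = np.exp(-np.minimum(np.abs(lo), np.abs(hi)) ** 2 / fan_width ** 2)
        return np.where(inside, 1.0, edge)

    dual_fan = fan(ws, wu) * fan(wt, wv)   # matching fans in (s,u) and (t,v)
    cone_err = ws * wv - wt * wu           # hypercone: the two slopes must agree
    hypercone = np.exp(-cone_err ** 2 / cone_width ** 2)
    return dual_fan * hypercone


def apply_hyperfan(lf, **kwargs):
    """Filter a 4D light field by masking its spectrum with the hyperfan."""
    spectrum = np.fft.fftn(lf)
    return np.real(np.fft.ifftn(spectrum * hyperfan_mask(lf.shape, **kwargs)))


# Usage: volumetric-focus a noisy 9x9-aperture light field over a slope range.
noisy_lf = np.random.default_rng(0).normal(size=(9, 9, 64, 64))
filtered_lf = apply_hyperfan(noisy_lf, slope_min=-0.3, slope_max=0.3)

In this sketch the dual-fan slope bounds play the role of the controllable depth range that stays in focus, while the hypercone term rejects energy whose (s,u) and (t,v) slopes disagree; the Gaussian rolloffs simply stand in for whatever passband shaping a real implementation would use.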