Generalized autofocus

D. Vaquero, Natasha Gelfand, M. Tico, K. Pulli, M. Turk
DOI: 10.1109/WACV.2011.5711547
Published in: 2011 IEEE Workshop on Applications of Computer Vision (WACV), January 5, 2011
Citations: 30

Abstract

All-in-focus imaging is a computational photography technique that produces images free of defocus blur by capturing a stack of images focused at different distances and merging them into a single sharp result. Current approaches assume that images have been captured offline, and that a reasonably powerful computer is available to process them. In contrast, we focus on the problem of how to capture such input stacks in an efficient and scene-adaptive fashion. Inspired by passive autofocus techniques, which select a single best plane of focus in the scene, we propose a method to automatically select a minimal set of images, focused at different depths, such that all objects in a given scene are in focus in at least one image. We aim to minimize both the amount of time spent metering the scene and capturing the images, and the total amount of high-resolution data that is captured. The algorithm first analyzes a set of low-resolution sharpness measurements of the scene while continuously varying the focus distance of the lens. From these measurements, we estimate the final lens positions required to capture all objects in the scene in acceptable focus. We demonstrate the use of our technique in a mobile computational photography scenario, where it is essential to minimize image capture time (as the camera is typically handheld) and processing time (as the computation and energy resources are limited).
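The selection step described above (sweep the lens, take low-resolution sharpness measurements, then pick a minimal set of focus positions that together render every object acceptably sharp) can be sketched as a greedy set cover. The sketch below is an illustration of that idea, not the authors' exact algorithm: the per-tile focus measure, the `rel_thresh` acceptability threshold, and the function names are assumptions introduced here.

```python
import numpy as np

def tile_sharpness(image, tiles=8):
    """Per-tile sharpness from second differences (a simple contrast-based
    focus measure; the paper's exact metric may differ)."""
    lap = np.abs(np.diff(image, n=2, axis=0))[:, :-2] + \
          np.abs(np.diff(image, n=2, axis=1))[:-2, :]
    h, w = lap.shape
    th, tw = h // tiles, w // tiles
    s = np.empty((tiles, tiles))
    for i in range(tiles):
        for j in range(tiles):
            s[i, j] = lap[i * th:(i + 1) * th, j * tw:(j + 1) * tw].var()
    return s

def select_focus_positions(sharpness_stack, rel_thresh=0.8):
    """Greedily choose a minimal set of focus positions so that every tile
    is acceptably sharp (>= rel_thresh of its best value over the sweep)
    in at least one chosen position.

    sharpness_stack: (n_positions, tiles, tiles) array of measurements
    from the low-resolution focus sweep.
    """
    best = sharpness_stack.max(axis=0)             # best sharpness per tile
    ok = sharpness_stack >= rel_thresh * best      # acceptable-focus mask
    uncovered = np.ones(best.shape, dtype=bool)
    chosen = []
    while uncovered.any():
        gains = (ok & uncovered).sum(axis=(1, 2))  # new tiles each position covers
        k = int(gains.argmax())
        if gains[k] == 0:
            break                                  # remaining tiles are uncoverable
        chosen.append(k)
        uncovered &= ~ok[k]
    return chosen
```

For a scene with objects at three distinct depths, the sweep would show three groups of tiles peaking at different positions, and the greedy loop would return those three lens positions; only those positions are then captured at full resolution.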