
Latest publications: 2010 IEEE International Conference on Computational Photography (ICCP)

Transfer efficiency and depth invariance in computational cameras
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585098
Jongmin Baek
Recent advances in computational cameras achieve extension of depth of field by modulating the aperture of an imaging system, either spatially or temporally. They are, however, accompanied by loss of image detail, the chief cause of which is low and/or depth-varying frequency response of such systems. In this paper, we examine the tradeoff between achieving depth invariance and maintaining high transfer efficiency by providing a mathematical framework for analyzing the transfer function of these computational cameras. Using this framework, we prove mathematical bounds on the efficacy of the tradeoff. These bounds lead to observations on the fundamental limitations of computational cameras. In particular, we show that some existing designs are already near-optimal in our metrics.
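To make the tradeoff concrete, here is a toy 1-D illustration (my own sketch, not Baek's mathematical framework): modeling a conventional camera's defocus PSF as a pillbox whose radius grows with depth offset, the frequency response at a fixed spatial frequency decays with defocus, so the camera is neither depth-invariant nor efficient at large blur.

```python
import numpy as np

def pillbox_mtf(blur_radius_px, n=256):
    # 1-D pillbox (defocus) kernel and the magnitude of its DFT.
    # Toy stand-in for a conventional camera's PSF at one scene depth.
    x = np.arange(n) - n // 2
    psf = (np.abs(x) <= blur_radius_px).astype(float)
    psf /= psf.sum()
    return np.abs(np.fft.fft(np.fft.ifftshift(psf)))

# Transfer efficiency at one mid-band frequency for increasing defocus:
freq_bin = 16
responses = [pillbox_mtf(r)[freq_bin] for r in (1, 4, 8)]

# The response decays as blur (depth offset) grows -- the depth-varying
# frequency response whose tradeoff against efficiency the paper bounds.
assert responses[0] > responses[1] > responses[2]
```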
Citations: 15
Spectral Focal Sweep: Extended depth of field from chromatic aberrations
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585101
O. Cossairt, S. Nayar
In recent years, many new camera designs have been proposed which preserve image detail over a larger depth range than conventional cameras. These methods rely on either mechanical motion or a custom optical element placed in the pupil plane of a camera lens to create the desired point spread function (PSF). This work introduces a new Spectral Focal Sweep (SFS) camera which can be used to extend depth of field (DOF) when some information about the reflectance spectra of objects being imaged is known. Our core idea is to exploit the principle that for a lens without chromatic correction, the focal length varies with wavelength. We use a SFS camera to capture an image that effectively “sweeps” the focal plane continuously through a scene without the need for either mechanical motion or custom optical elements. We demonstrate that this approach simplifies lens design constraints, enabling an inexpensive implementation to be constructed with off-the-shelf components. We verify the effectiveness of our implementation and show several example images illustrating a significant increase in DOF over conventional cameras.
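The core principle, that focus varying with wavelength sweeps the focal plane, can be imitated numerically. In this hypothetical model (Gaussian PSFs and a linear focus-vs-wavelength law, my assumptions rather than the paper's optics), averaging PSFs over wavelengths yields a far more depth-invariant response than any single wavelength:

```python
import numpy as np

def gaussian_psf(sigma, n=101):
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# Toy model: blur grows with |depth - focus(lambda)|, and the in-focus
# depth sweeps linearly across the scene's depth range with wavelength.
depths = np.array([0.0, 0.5, 1.0])
focals = np.linspace(0.0, 1.0, 30)   # per-wavelength in-focus depth

def psf_at(depth, focus):
    return gaussian_psf(0.5 + 4.0 * abs(depth - focus))

single = [psf_at(d, focals[0]).max() for d in depths]   # one wavelength only
swept = [np.mean([psf_at(d, f) for f in focals], axis=0).max() for d in depths]

# The spectrally swept PSF's peak varies much less across depth:
spread = lambda v: max(v) - min(v)
assert spread(swept) < spread(single)
```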
Citations: 87
Rich image capture with plenoptic cameras
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585092
Todor Georgiev, A. Lumsdaine
The plenoptic function was originally defined as a record of both the 3D structure of the lightfield and of its dependence on parameters such as wavelength, polarization, etc. Still, most work on these ideas has emphasized the 3D aspect of lightfield capture and manipulation, with less attention paid to other parameters. In this paper, we leverage the high resolution and flexible sampling trade-offs of the focused plenoptic camera to perform high-resolution capture of the rich “non 3D” structure of the plenoptic function. Two different techniques are presented and analyzed, using extended dynamic range photography as a particular example. The first technique simultaneously captures multiple exposures with a microlens array that has an interleaved set of different filters. The second technique places multiple filters at the main lens aperture. Experimental results validate our approach, producing 1.3Mpixel HDR images with a single capture.
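The extended-dynamic-range idea, capturing the same radiance through filters with different attenuations and merging them, can be sketched in a few lines (toy values, not the paper's microlens pipeline): divide each capture by its filter gain and keep only unsaturated measurements.

```python
import numpy as np

# The same scene radiance seen through two filters with different
# transmissions; the sensor clips each capture at 1.0.
rng = np.random.default_rng(0)
radiance = rng.uniform(0.0, 4.0, size=1000)   # "true" scene values
gains = [1.0, 0.25]                           # hypothetical filter gains
shots = [np.clip(radiance * g, 0.0, 1.0) for g in gains]

# Undo each gain and average only unsaturated pixels:
est = np.zeros_like(radiance)
wsum = np.zeros_like(radiance)
for shot, g in zip(shots, gains):
    valid = shot < 1.0
    est[valid] += shot[valid] / g
    wsum[valid] += 1.0
est /= np.maximum(wsum, 1)

# Radiance well beyond the clipping point of the unfiltered capture is
# recovered from the strongly filtered one.
assert np.allclose(est[wsum > 0], radiance[wsum > 0])
```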
Citations: 18
Image upsampling via texture hallucination
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585097
Yoav HaCohen, Raanan Fattal, Dani Lischinski
Image upsampling is a common yet challenging task, since it is severely underconstrained. While considerable progress has been made in preserving the sharpness of salient edges, current methods fail to reproduce the fine detail typically present in the textured regions bounded by these edges, resulting in unrealistic appearance. In this paper we address this fundamental shortcoming by integrating higher-level image analysis and custom low-level image synthesis. Our approach extends and refines the patch-based image model of Freeman et al. [10] and interprets the image as a tiling of distinct textures, each of which is matched to an example in a database of relevant textures. The matching is not done at the patch level, but rather collectively, over entire segments. Following this model fitting stage, which requires some user guidance, a higher-resolution image is synthesized using a hybrid approach that incorporates principles from example-based texture synthesis. We show that for images that comply with our model, our method is able to reintroduce consistent fine-scale detail, resulting in enhanced appearance of textured regions.
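The matching step rests on finding, for image content, its best example in a texture database. A minimal stand-in (nearest-neighbor lookup by sum of squared differences over toy random patches; the paper matches whole segments, not single patches) looks like this:

```python
import numpy as np

# A tiny "texture database" of example patches, and a query that is a
# noisy copy of one entry (all data synthetic, for illustration only).
rng = np.random.default_rng(3)
database = rng.uniform(size=(50, 8, 8))
query = database[17] + rng.normal(0.0, 0.01, size=(8, 8))

# Nearest neighbor by sum of squared differences:
ssd = ((database - query) ** 2).sum(axis=(1, 2))
best = int(ssd.argmin())
assert best == 17   # the noisy query matches its source patch
```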
Citations: 80
Motion blur removal with orthogonal parabolic exposures
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585100
T. Cho, Anat Levin, F. Durand, W. Freeman
Object movement during exposure generates blur. Removing blur is challenging because one has to estimate the motion blur, which can spatially vary over the image. Even if the motion is successfully identified, blur removal can be unstable because the blur kernel attenuates high frequency image contents. We address the problem of removing blur from objects moving at constant velocities in arbitrary 2D directions. Our solution captures two images of the scene with a parabolic motion in two orthogonal directions. We show that our strategy near-optimally preserves image content, and allows for stable blur inversion. Taking two images of a scene helps us estimate spatially varying object motions. We present a prototype camera and demonstrate successful motion deblurring on real motions.
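Why two captures make blur inversion stable can be seen in a toy 1-D analogue (box kernels instead of the paper's parabolic ones, purely for illustration): a single blur kernel's spectrum has near-zeros where image content is lost, but the zeros of two suitably different kernels do not coincide, so jointly every frequency survives in at least one image.

```python
import numpy as np

n = 128

def box_spectrum(width):
    # Magnitude spectrum of a 1-D box motion-blur kernel.
    k = np.zeros(n)
    k[:width] = 1.0 / width
    return np.abs(np.fft.fft(k))

s1, s2 = box_spectrum(8), box_spectrum(13)
joint = np.sqrt(s1**2 + s2**2)   # effective spectrum using both images

assert s1.min() < 1e-3    # one image: some frequencies nearly destroyed
assert joint.min() > 1e-2 # the pair: every frequency retained somewhere
```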
Citations: 62
Search-and-replace editing for personal photo collections
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585099
S. Hasinoff, M. Jóźwiak, F. Durand, W. Freeman
We propose a new system for editing personal photo collections, inspired by search-and-replace editing for text. In our system, local edits specified by the user in a single photo (e.g., using the “clone brush” tool) can be propagated automatically to other photos in the same collection, by matching the edited region across photos. To achieve this, we build on tools from computer vision for image matching. Our experimental results on real photo collections demonstrate the feasibility and potential benefits of our approach.
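Propagating an edit requires locating the edited region in each other photo. A crude stand-in for the paper's vision-based matching is exhaustive SSD template matching on synthetic data (here the second photo is just a shifted copy, an assumption made to keep the example checkable):

```python
import numpy as np

rng = np.random.default_rng(4)
photo_a = rng.uniform(size=(32, 32))
photo_b = np.roll(photo_a, (5, 3), axis=(0, 1))   # same scene, shifted

region = photo_a[10:14, 10:14]   # the 4x4 patch the user edited
scores = np.full((29, 29), np.inf)
for i in range(29):              # all valid 4x4 placements in photo_b
    for j in range(29):
        scores[i, j] = ((photo_b[i:i+4, j:j+4] - region) ** 2).sum()
di, dj = np.unravel_index(scores.argmin(), scores.shape)

# The matched location is where the edit would be replayed in photo_b:
assert (di, dj) == (15, 13)      # original (10, 10) plus the (5, 3) shift
```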
Citations: 18
Coded rolling shutter photography: Flexible space-time sampling
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585094
Jinwei Gu, Y. Hitomi, T. Mitsunaga, S. Nayar
We propose a novel readout architecture called coded rolling shutter for complementary metal-oxide semiconductor (CMOS) image sensors. Rolling shutter has traditionally been considered a disadvantage to image quality since it often introduces skew artifacts. In this paper, we show that by controlling the readout timing and the exposure length for each row, the row-wise exposure discrepancy in rolling shutter can be exploited to flexibly sample the 3D space-time volume of scene appearance, and can thus be advantageous for computational photography. The required controls can be readily implemented in standard CMOS sensors by altering the logic of the control unit. We propose several coding schemes and applications: (1) coded readout allows us to better sample the time dimension for high-speed photography and optical flow based applications; and (2) row-wise control enables capturing motion-blur-free high dynamic range images from a single shot. While a prototype chip is currently in development, we demonstrate the benefits of coded rolling shutter via simulation using images of real scenes.
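Application (2), single-shot HDR from row-wise exposure control, can be sketched with toy numbers (alternating long/short rows and a hard clip at 1.0; none of this reflects the actual sensor logic): each even/odd row pair covers both shadows and highlights.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 8.0, size=(16, 16))   # rows x cols of radiance
t_long, t_short = 1.0, 0.125                   # per-row exposure lengths
exposure = np.where(np.arange(16)[:, None] % 2 == 0, t_long, t_short)
captured = np.clip(scene * exposure, 0.0, 1.0) # sensor clips at 1.0

# For each even/odd row pair, prefer the long row unless it saturated:
long_rows, short_rows = captured[0::2], captured[1::2]
hdr = np.where(long_rows < 1.0, long_rows / t_long, short_rows / t_short)

# Unsaturated pixels come from the long rows; saturated ones fall back to
# the (noisier in practice) short rows directly below them.
assert np.allclose(hdr, np.where(scene[0::2] < 1.0, scene[0::2], scene[1::2]))
```

The recovered values extend to roughly 8x the single-exposure clipping point, at the cost of halved vertical resolution in this naive reconstruction.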
Citations: 47
A context-aware light source
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585091
Oliver Wang, M. Fuchs, Christian Fuchs, H. Lensch, James Davis, H. Seidel
We present a technique that combines the visual benefits of virtual enhancement with the intuitive interaction of the real world. We accomplish this by introducing the concept of a context-aware light source. This light source provides illumination based on scene context in real-time. This allows us to project feature enhancement in-place onto an object while it is being manipulated by the user. A separate proxy light source can be employed to enable freely programmable shading responses for interactive scene analysis. We created a prototype hardware setup and have implemented several applications that demonstrate the approach, such as a sharpening light, an edge highlighting light, an accumulation light, and a light with a programmable, nonlinear shading response.
Citations: 23
Seeing Mt. Rainier: Lucky imaging for multi-image denoising, sharpening, and haze removal
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585096
Neel Joshi, Michael F. Cohen
Photographing distant objects is challenging for a number of reasons. Even on a clear day, atmospheric haze often represents the majority of light received by a camera. Unfortunately, dehazing alone cannot create a clean image. The combination of shot noise and quantization noise is exacerbated when the contrast is expanded after haze removal. Dust on the sensor that may be unnoticeable in the original images creates serious artifacts. Multiple images can be averaged to overcome the noise, but the combination of long lenses and small camera motion as well as time-varying atmospheric refraction results in large global and local shifts of the images on the sensor. An iconic example of a distant object is Mount Rainier, when viewed from Seattle, which is 90 kilometers away. This paper demonstrates a methodology to pull out a clean image of Mount Rainier from a series of images. Rigid and non-rigid alignment steps bring individual pixels into alignment. A novel local weighted averaging method based on ideas from "lucky imaging" minimizes blur, resampling and alignment errors, as well as effects of sensor dust, to maintain the sharpness of the original pixel grid. Finally, dehazing and contrast expansion results in a sharp clean image.
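The statistical core of multi-image denoising is simple to verify numerically: averaging N aligned noisy frames shrinks the noise standard deviation by about sqrt(N). (This sketch uses plain averaging of synthetic 1-D frames; the paper's per-pixel "lucky" weighting and alignment are omitted.)

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.linspace(0.0, 1.0, 512)        # a synthetic noise-free signal
sigma, n_frames = 0.1, 64
frames = clean + rng.normal(0.0, sigma, size=(n_frames, clean.size))

merged = frames.mean(axis=0)              # average the aligned frames
err_single = (frames[0] - clean).std()
err_merged = (merged - clean).std()

# Residual noise drops by roughly sqrt(64) = 8x, leaving headroom for the
# contrast expansion that dehazing requires.
assert err_merged < err_single / 4
```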
Citations: 98
High resolution large format tile-scan camera: Design, calibration, and extended depth of field
Pub Date: 2010-03-29 DOI: 10.1109/ICCPHOT.2010.5585095
M. Ben-Ezra
Emerging applications in virtual museums, cultural heritage, and digital art preservation require very high quality and high resolution imaging of objects with fine structure, shape, and texture. To this end we propose to use large format digital photography. We analyze and resolve some of the unique challenges that are presented by digital large format photography, in particular sensor-lens mismatch and extended depth of field. Based on our analysis we have designed and built a digital tile-scan large format camera capable of acquiring high quality and high resolution images of static scenes. We also developed calibration techniques that are specific to our camera as well as a novel and simple algorithm for focal stack processing of very large images with significant magnification variations.
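The focal-stack step can be sketched in 1-D (a generic sharpest-slice fusion on synthetic data, not the paper's magnification-aware algorithm): for each pixel, keep the stack slice with the strongest local contrast.

```python
import numpy as np

def local_contrast(img, radius=5):
    # Max-minus-min over a sliding window as a crude focus measure.
    pad = np.pad(img, radius, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, 2 * radius + 1)
    return win.max(axis=-1) - win.min(axis=-1)

x = np.arange(200)
sharp = (np.sin(x / 3.0) > 0).astype(float)   # a high-contrast pattern
slice_a = np.where(x < 100, sharp, 0.5)       # left half in focus
slice_b = np.where(x >= 100, sharp, 0.5)      # right half in focus

stack = np.stack([slice_a, slice_b])
scores = np.stack([local_contrast(s) for s in stack])
fused = stack[scores.argmax(axis=0), np.arange(x.size)]

# The fused signal is sharp almost everywhere, unlike either slice alone:
assert np.mean(fused == sharp) > 0.9
```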
Citations: 15