
2012 IEEE International Conference on Computational Photography (ICCP): Latest Publications

Variable focus video: Reconstructing depth and video for dynamic scenes
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215219
Nitesh Shroff, A. Veeraraghavan, Yuichi Taguchi, Oncel Tuzel, Amit K. Agrawal, R. Chellappa
Traditional depth from defocus (DFD) algorithms assume that the camera and the scene are static during acquisition. In this paper, we examine the effects of camera and scene motion on DFD algorithms. We show that, given accurate estimates of optical flow (OF), one can robustly warp the focal stack (FS) images to obtain a virtual static FS and apply traditional DFD algorithms on the static FS. Acquiring accurate OF in the presence of varying focal blur is a challenging task. We show how defocus blur variations cause inherent biases in the estimates of optical flow. We then show how to robustly handle these biases and compute accurate OF estimates in the presence of varying focal blur. This leads to an architecture and an algorithm that converts a traditional 30 fps video camera into a co-located 30 fps image and range sensor. Further, the ability to extract image and range information allows us to render images with artistic depth-of-field effects, both extending and reducing the depth of field of the captured images. We demonstrate experimental results on challenging scenes captured using a camera prototype.
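The core idea, warping each focal stack frame to a common reference time before running a static depth algorithm, can be sketched as below. This is a minimal illustration assuming the per-frame optical flow is already available from an external estimator; the Laplacian-energy focus measure is a stand-in for the paper's full DFD model, and `warp_to_reference` / `depth_from_virtual_static_stack` are hypothetical helper names.

```python
# Minimal sketch: warp a dynamic focal stack into a virtual static one, then
# pick the sharpest slice per pixel. Flow is assumed given (not estimated here),
# and the sharpness proxy replaces the paper's DFD model.
import numpy as np
from scipy.ndimage import map_coordinates, laplace, uniform_filter

def warp_to_reference(frame, flow):
    """frame: (H, W) grayscale; flow: (H, W, 2) backward displacements (dy, dx)."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([yy + flow[..., 0], xx + flow[..., 1]])
    return map_coordinates(frame, coords, order=1, mode='nearest')

def depth_from_virtual_static_stack(frames, flows, window=9):
    """Build a virtual static focal stack, then take the best-focused slice index."""
    stack = np.stack([warp_to_reference(f, fl) for f, fl in zip(frames, flows)])
    # Local Laplacian energy as a per-pixel sharpness score for each slice.
    sharpness = np.stack([uniform_filter(laplace(s) ** 2, window) for s in stack])
    return np.argmax(sharpness, axis=0)  # slice index serves as a depth label
```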
Citations: 17
Fourier Slice Super-resolution in plenoptic cameras
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215210
F. Pérez, Alejandro Pérez, Manuel Rodríguez, E. Magdaleno
Plenoptic cameras are a promising way to extend the capabilities of current commercial cameras because they capture the four-dimensional lightfield of a scene. By processing the recorded lightfield, these cameras offer the possibility of refocusing the scene after the shot or obtaining 3D information. Conventional photographs focused on particular planes can be obtained by projecting the four-dimensional lightfield onto two spatial dimensions. These photographs can be computed efficiently using the Fourier Slice technique, but their resolution is usually less than 1% of the full resolution of the camera sensor. Several super-resolution methods have recently been developed to increase the spatial resolution of plenoptic cameras. In this paper we propose a new super-resolution method based on the Fourier Slice technique. We show how several existing super-resolution methods can be seen as particular cases of this approach. Beyond the theoretical interest of this unified view, we also show how to obtain spatial and depth super-resolution simultaneously, removing the limitations of previous approaches.
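By the Fourier slice theorem, the frequency-domain slice that Fourier Slice photography extracts is equivalent to a spatial-domain shift-and-add projection of the 4D lightfield. The sketch below shows that underlying refocusing operation, not the paper's super-resolution method itself; `lf` is an assumed (U, V, H, W) lightfield array and `refocus` a hypothetical helper.

```python
# Minimal sketch of lightfield refocusing: shear each sub-aperture view by an
# amount proportional to its offset from the central view, then average. This
# spatial-domain projection is what the Fourier Slice technique computes in
# the frequency domain.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(lf, alpha):
    """Synthesize a photograph focused at the plane parameterized by alpha."""
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    photo = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = alpha * (u - uc), alpha * (v - vc)
            photo += subpixel_shift(lf[u, v], (dy, dx), order=1)
    return photo / (U * V)
```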
Citations: 26
Calibration-free rolling shutter removal
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215213
Matthias Grundmann, Vivek Kwatra, Daniel Castro, Irfan Essa
We present a novel algorithm for efficient removal of rolling shutter distortions in uncalibrated streaming videos. Our proposed method is calibration-free: it needs no knowledge of the camera used, nor does it require calibration using specially recorded calibration sequences. Our algorithm can perform rolling shutter removal under varying focal lengths, as in videos from CMOS cameras equipped with an optical zoom. We evaluate our approach across a broad range of cameras and video sequences, demonstrating robustness, scalability, and repeatability. We also conducted a user study, which demonstrates preference for the output of our algorithm over other state-of-the-art methods. Our algorithm is computationally efficient, easy to parallelize, and robust to challenging artifacts introduced by various cameras with differing technologies.
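Rolling shutter distortion arises because each image row is exposed at a slightly different time. A minimal sketch of a per-row correction is given below, assuming a known global inter-frame velocity; this is a strong simplification (the paper instead estimates a mixture of homographies without any such input), and `unwarp_rolling_shutter` is a hypothetical helper.

```python
# Minimal sketch: undo rolling shutter by warping each row according to its
# relative readout time, given an assumed, externally estimated inter-frame
# translation. The paper's calibration-free model is far more general.
import numpy as np
from scipy.ndimage import map_coordinates

def unwarp_rolling_shutter(frame, velocity):
    """frame: (H, W); velocity: (vy, vx) pixels moved per frame interval."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    t = yy / float(h)  # relative readout time of each row in [0, 1)
    coords = np.stack([yy + velocity[0] * t, xx + velocity[1] * t])
    return map_coordinates(frame, coords, order=1, mode='nearest')
```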
Citations: 177
Flutter Shutter Video Camera for compressive sensing of videos
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215211
Jason Holloway, Aswin C. Sankaranarayanan, A. Veeraraghavan, S. Tambe
Video cameras are invariably bandwidth limited, and this results in a trade-off between spatial and temporal resolution. Advances in sensor manufacturing technology have tremendously increased the available spatial resolution of modern cameras while simultaneously lowering the costs of these sensors. In stark contrast, hardware improvements in temporal resolution have been modest. One solution to enhance temporal resolution is to use high-bandwidth imaging devices such as high-speed sensors and camera arrays. Unfortunately, these solutions are expensive. An alternate solution is motivated by recent advances in computational imaging and compressive sensing. Camera designs based on these principles typically modulate the incoming video using spatio-temporal light modulators and capture the modulated video at a lower bandwidth. Reconstruction algorithms, motivated by compressive sensing, are subsequently used to recover the high-bandwidth video at high fidelity. Though promising, these methods have been limited because they require complex and expensive light modulators that make the techniques difficult to realize in practice. In this paper, we show that a simple coded exposure modulation is sufficient to reconstruct high-speed videos. We propose the Flutter Shutter Video Camera (FSVC), in which each exposure of the sensor is temporally coded using an independent pseudo-random sequence. Such exposure coding is easily achieved in modern sensors and is already a feature of several machine vision cameras. We also develop two algorithms for reconstructing the high-speed video: the first based on minimizing the total variation of the spatio-temporal slices of the video, and the second based on a data-driven, dictionary-based approximation. We evaluate on simulated videos and real data to illustrate the robustness of our system.
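The forward model implied by the abstract is simple enough to sketch: one coded exposure is the sum of the high-speed subframes in its window, each gated by an independent pseudo-random binary code. The reconstruction step (total variation or dictionary-based, per the paper) is omitted, and `flutter_shutter_capture` is a hypothetical name.

```python
# Minimal sketch of the FSVC forward model: simulate one coded exposure from a
# chunk of high-speed video. Each subframe is gated by a pseudo-random on/off
# shutter code before temporal integration on the sensor.
import numpy as np

def flutter_shutter_capture(subframes, rng=np.random.default_rng(0)):
    """subframes: (T, H, W) high-speed video chunk -> one coded exposure + code."""
    T = subframes.shape[0]
    code = rng.integers(0, 2, size=T)           # pseudo-random binary shutter code
    coded = (code[:, None, None] * subframes).sum(axis=0)
    return coded / max(code.sum(), 1), code     # normalize by the open-shutter count
```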
Citations: 76
CS-MUVI: Video compressive sensing for spatial-multiplexing cameras
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215212
Aswin C. Sankaranarayanan, Christoph Studer, Richard Baraniuk
Compressive sensing (CS)-based spatial-multiplexing cameras (SMCs) sample a scene through a series of coded projections using a spatial light modulator and a few optical sensor elements. SMC architectures are particularly useful when imaging at wavelengths for which full-frame sensors are too cumbersome or expensive. While existing recovery algorithms for SMCs perform well for static images, they typically fail for time-varying scenes (videos). In this paper, we propose a novel CS multi-scale video (CS-MUVI) sensing and recovery framework for SMCs. Our framework features a co-designed video CS sensing matrix and recovery algorithm that provide an efficiently computable low-resolution video preview. We estimate the scene's optical flow from the video preview and feed it into a convex-optimization algorithm to recover the high-resolution video. We demonstrate the performance and capabilities of the CS-MUVI framework for different scenes.
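A minimal sketch of the SMC measurement model and the low-resolution-preview idea, under strong simplifications: measurements are inner products of the flattened scene with modulator patterns, and a coarse preview is obtained by least squares through an assumed block-replication upsampling operator. The paper designs dual-scale patterns so this inversion is well conditioned; the plain random patterns assumed here work only approximately, and both helper names are hypothetical.

```python
# Minimal sketch: single-pixel-style measurements, then a least-squares
# low-resolution preview obtained by composing the sensing patterns with an
# upsampling operator (a simplification of the CS-MUVI preview construction).
import numpy as np

def smc_measure(scene, patterns):
    """scene: (N,) flattened image; patterns: (M, N) modulator rows -> (M,) samples."""
    return patterns @ scene

def lowres_preview(y, patterns, hi_shape, lo=8):
    """Recover a lo x lo preview by least squares through upsampling."""
    H, W = hi_shape
    # U maps each low-res pixel to the high-res block it covers (block replication).
    U = np.zeros((H * W, lo * lo))
    for i in range(H):
        for j in range(W):
            U[i * W + j, (i * lo // H) * lo + (j * lo // W)] = 1.0
    A = patterns @ U                        # effective low-res sensing matrix
    x_lo, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x_lo.reshape(lo, lo)
```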
Citations: 151
Contrast preserving decolorization
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215215
Cewu Lu, Li Xu, Jiaya Jia
Decolorization - the process of transforming a color image to a grayscale one - is a basic tool in digital printing, stylized black-and-white photography, and many single-channel image processing applications. In this paper, we propose an optimization approach that aims to maximally preserve the original color contrast. Our main contribution is to relax the strict color-mapping order constraint derived from the human visual system, which enables the use of a bimodal distribution to constrain spatial pixel differences and allows automatic selection of a suitable gray scale that preserves the original contrast. Both quantitative and qualitative evaluations bear out the effectiveness of the proposed method.
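A brute-force variant of the idea is sketched below, not the paper's optimization: search a coarse grid of linear channel weights for the grayscale mapping whose pixel differences best match the original color contrasts over random pixel pairs. The bimodal relaxation that is the paper's actual contribution is not reproduced, and all helper names are hypothetical.

```python
# Minimal sketch: contrast-preserving decolorization by exhaustive search over
# linear RGB weights (w_r + w_g + w_b = 1), scoring each candidate by how well
# grayscale differences match color differences on sampled pixel pairs.
import numpy as np

def decolorize(img, pairs=2000, rng=np.random.default_rng(0)):
    """img: (H, W, 3) float RGB in [0, 1] -> grayscale preserving color contrast."""
    h, w, _ = img.shape
    p = img.reshape(-1, 3)
    i = rng.integers(0, h * w, size=pairs)
    j = rng.integers(0, h * w, size=pairs)
    color_dist = np.linalg.norm(p[i] - p[j], axis=1)   # target contrasts
    best, best_score = None, -np.inf
    for wr in np.arange(0.0, 1.01, 0.1):
        for wg in np.arange(0.0, 1.01 - wr, 0.1):
            wt = np.array([wr, wg, 1.0 - wr - wg])
            g = p @ wt
            # Penalize mismatch to (scale-normalized) color contrast.
            score = -np.mean((np.abs(g[i] - g[j]) - color_dist / np.sqrt(3)) ** 2)
            if score > best_score:
                best, best_score = wt, score
    return (img @ best).clip(0, 1)
```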
Citations: 113
Fast reactive control for illumination through rain and snow
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215217
Raoul de Charette, R. Tamburo, P. Barnum, Anthony G. Rowe, T. Kanade, S. Narasimhan
During low-light conditions, drivers rely mainly on headlights to improve visibility. But in the presence of rain and snow, headlights can paradoxically reduce visibility due to light reflected off of precipitation back towards the driver. Precipitation also scatters light across a wide range of angles that disrupts the vision of drivers in oncoming vehicles. In contrast to recent computer vision methods that digitally remove rain and snow streaks from captured images, we present a system that will directly improve driver visibility by controlling illumination in response to detected precipitation. The motion of precipitation is tracked and only the space around particles is illuminated using fast dynamic control. Using a physics-based simulator, we show how such a system would perform under a variety of weather conditions. We build and evaluate a proof-of-concept system that can avoid water drops generated in the laboratory.
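A minimal sketch of the control loop's last step follows, assuming particle detection, camera-projector calibration, and latency compensation are handled elsewhere: each detected particle is advanced by a constant-velocity prediction, and the projector mask is zeroed in a small neighborhood around the predicted position so the light avoids the drops. `illumination_mask` is a hypothetical helper.

```python
# Minimal sketch: build a projector mask that de-illuminates predicted
# precipitation positions, leaving the rest of the field illuminated.
import numpy as np

def illumination_mask(shape, particles, velocity, dt, radius=2):
    """particles: (K, 2) detected (y, x); velocity: (K, 2) px/s; dt: seconds."""
    mask = np.ones(shape, dtype=np.uint8)      # 1 = projector pixel on
    pred = np.round(particles + velocity * dt).astype(int)
    for y, x in pred:
        y0, y1 = max(y - radius, 0), min(y + radius + 1, shape[0])
        x0, x1 = max(x - radius, 0), min(x + radius + 1, shape[1])
        mask[y0:y1, x0:x1] = 0                 # carve out light around the drop
    return mask
```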
Citations: 26
Depth-aware motion deblurring
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215220
Li Xu, Jiaya Jia
Motion deblurring from images captured in a scene with depth variation requires estimating spatially-varying point spread functions (PSFs). We tackle this problem with a stereopsis configuration, using depth information to aid blur removal. We observe that the simple scheme of partitioning the blurred images into regions and estimating their PSFs separately may leave small regions without the structural information needed to guide PSF estimation; we accordingly propose region trees to estimate the PSFs hierarchically. Erroneous PSFs are rejected with a novel PSF selection scheme based on the shock filtering invariance of natural images. Our framework also applies to general single-image spatially-varying deblurring.
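A minimal sketch of the layer-wise step the method builds on, assuming the depth layering and per-layer PSFs are already known (estimating them robustly is the paper's contribution): each layer is deconvolved with its own PSF and the results are composited by the layer masks. Wiener deconvolution stands in for the paper's deblurring machinery, and all helper names are hypothetical.

```python
# Minimal sketch: depth-aware non-blind deblurring by per-layer Wiener
# deconvolution, compositing results with the layer masks.
import numpy as np

def pad_psf(psf, shape):
    """Embed a small PSF in a full-size array, centered at the origin for FFT."""
    out = np.zeros(shape)
    ph, pw = psf.shape
    out[:ph, :pw] = psf
    return np.roll(out, (-(ph // 2), -(pw // 2)), axis=(0, 1))

def wiener_deconv(img, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution with a scalar SNR assumption."""
    H = np.fft.fft2(pad_psf(psf, img.shape))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

def depth_aware_deblur(img, layer_masks, layer_psfs):
    """img: (H, W); layer_masks: boolean (H, W) per layer; layer_psfs: matching PSFs."""
    result = np.zeros_like(img)
    for mask, psf in zip(layer_masks, layer_psfs):
        result[mask] = wiener_deconv(img, psf)[mask]
    return result
```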
Citations: 95
Depth coded shape from focus
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215218
Martin Lenz, David Ferstl, M. Rüther, H. Bischof
We present a novel shape from focus method for high-speed shape reconstruction in optical microscopy. While the traditional shape from focus approach depends heavily on the presence of surface texture and requires considerable measurement time, our method is able to perform reconstruction from only two images. Our method relies on the rapid projection of a binary pattern sequence while the object is continuously moved through the camera's focus range and a single image is continuously exposed. Deconvolution of the integral image allows a direct decoding of the binary pattern and its associated depth. Experiments on a synthetic dataset and on real scenes show that a depth map can be reconstructed at only 3% of the memory cost and a fraction of the computational effort of traditional shape from focus.
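For contrast, here is a minimal sketch of the traditional shape-from-focus baseline the abstract refers to: sweep focus, score per-pixel sharpness in each slice with a modified-Laplacian focus measure, and take the argmax as the depth index. The paper's two-image coded variant is not reproduced, and `shape_from_focus` is a hypothetical helper.

```python
# Minimal sketch of classic shape from focus: the depth at each pixel is the
# index of the focal slice where a local focus measure peaks.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def modified_laplacian(img):
    """Sum of absolute second derivatives in x and y (a common focus measure)."""
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=float)
    return np.abs(convolve(img, kx)) + np.abs(convolve(img, kx.T))

def shape_from_focus(stack, window=9):
    """stack: (T, H, W) focal sweep -> (H, W) depth as the index of best focus."""
    scores = np.stack([uniform_filter(modified_laplacian(s), window) for s in stack])
    return np.argmax(scores, axis=0)
```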
Citations: 2
Diffuse structured light
Pub Date: 2012-04-28 DOI: 10.1109/ICCPhot.2012.6215216
S. Nayar, Mohit Gupta
Today, structured light systems are widely used in applications such as robotic assembly, visual inspection, surgery, entertainment, games and digitization of cultural heritage. Current structured light methods are faced with two serious limitations. First, they are unable to cope with scene regions that produce strong highlights due to specular reflection. Second, they cannot recover useful information for regions that lie within shadows. We observe that many structured light methods use illumination patterns that have translational symmetry, i.e., two-dimensional patterns that vary only along one of the two dimensions. We show that, for this class of patterns, diffusion of the patterns along the axis of translation can mitigate the adverse effects of specularities and shadows. We show results for two applications - 3D scanning using phase shifting of sinusoidal patterns and separation of direct and global components of light transport using high-frequency binary stripes.
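A minimal sketch of the phase-shifting decode used in the paper's first application: three sinusoidal patterns shifted by 120 degrees are projected, and the per-pixel phase recovered below encodes projector-camera correspondence (and hence depth). The diffusion of the patterns along the translation axis, which is the paper's contribution, happens optically and leaves this decode unchanged; `three_step_phase` is a hypothetical helper.

```python
# Minimal sketch: standard 3-step phase-shifting decode for sinusoidal
# structured light. Low modulation flags shadowed or unreliable pixels.
import numpy as np

def three_step_phase(i1, i2, i3):
    """i1..i3: (H, W) images under patterns shifted by -120, 0, +120 degrees."""
    phase = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    modulation = np.sqrt(3.0 * (i1 - i3) ** 2 + (2.0 * i2 - i1 - i3) ** 2) / 3.0
    return phase, modulation
```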
Citations: 55