Fast and Efficient Restoration of Extremely Dark Light Fields

Authors: Mohit Lamba, K. Mitra
DOI: 10.1109/WACV51458.2022.00321
Published in: 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022-01-01
Citations: 4
Abstract
The ability of Light Field (LF) cameras to capture the 3D geometry of a scene in a single photographic exposure has become central to several applications, ranging from passive depth estimation to post-capture refocusing and view synthesis. But these LF applications break down in extreme low-light conditions due to excessive noise and poor image photometry. Existing low-light restoration techniques are inappropriate because they either do not leverage the LF's multi-view perspective or have enormous time and memory complexity. We propose a three-stage network that is simultaneously fast and accurate for real-world applications. Our accuracy comes from the fact that our three-stage architecture utilizes the global, local, and view-specific information present in low-light LFs and fuses them using an RNN-inspired feedforward network. We are fast because we restore multiple views simultaneously and so require fewer forward passes. Besides these advantages, our network is flexible enough to restore an m × m LF during inference, even if trained on a smaller n × n (n < m) LF, without any finetuning. Extensive experiments on real low-light LFs demonstrate that, compared to the current state-of-the-art, our model can achieve up to 1 dB higher restoration PSNR, with a 9× speedup, 23% smaller model size, and about 5× fewer floating-point operations.
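The abstract's core ideas — fusing global, local, and view-specific feature streams with an RNN-inspired feedforward update, while processing all views of the LF in one batched pass — can be illustrated with a toy sketch. This is *not* the authors' architecture: the dimensions, the GRU-style gate, and all weights below are hypothetical stand-ins chosen only to show the fusion pattern.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: a 5x5 light field with C-channel features per view.
V, C = 5 * 5, 8                      # number of views, feature channels
rng = np.random.default_rng(0)

# Stand-ins for the three information streams named in the abstract:
g = rng.standard_normal((1, C))      # global features (shared across views)
l = rng.standard_normal((V, C))      # local features (one per view)
s = rng.standard_normal((V, C))      # view-specific features (one per view)

# Toy RNN-inspired fusion: treat the three streams as a length-3 "sequence"
# and fold them into one hidden state per view with a GRU-style gate,
# batched over all V views at once (one pass instead of V passes).
Wz = rng.standard_normal((C, C))     # update-gate weights (random for demo)
Wh = rng.standard_normal((C, C))     # candidate-state weights
h = np.zeros((V, C))
for x in (np.broadcast_to(g, (V, C)), l, s):
    z = sigmoid(x @ Wz)                    # per-view update gate in (0, 1)
    h = (1 - z) * h + z * np.tanh(x @ Wh)  # gated feedforward state update

print(h.shape)  # one fused feature vector per view: (25, 8)
```

Because the loop runs over the three streams rather than over views, the same code handles a larger m × m LF by simply enlarging V, which mirrors the train-on-n × n, infer-on-m × m flexibility the abstract claims.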