Neural Radiance Fields (NeRF) have demonstrated exceptional three-dimensional (3D) reconstruction quality by synthesizing novel views from multi-view images. However, NeRF methods typically require sharp, static input images to function effectively, and little attention has been paid to suboptimal scenarios involving degradations such as reflections and blur. Although blurred images are common in real-world settings, few studies have explored NeRF's ability to handle blur, particularly defocus blur. Correctly modeling the formation of defocus blur is the key to deblurring and enables accurate novel-view synthesis from blurred inputs. This paper therefore proposes Multi-View Deblurring Neural Radiance Fields from Defocused Images (MVD-NeRF), a framework for 3D reconstruction from defocus-blurred images. The framework preserves consistency in 3D geometry and appearance by explicitly modeling the formation of defocus blur. MVD-NeRF introduces the Defocus Modeling Approach (DMA), a novel method for simulating defocused scenes: for a fixed view, DMA assumes that each pixel is rendered by multiple rays emitted from the same light source. In addition, MVD-NeRF proposes a Multi-view Panning Algorithm (MPA), which simulates light-source movement through slight shifts of the camera center across views, thereby generating blur effects similar to those in real photography. Together, DMA and MPA enhance MVD-NeRF's ability to capture intricate scene details. Experimental results show that MVD-NeRF achieves significant improvements in Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). The source code for MVD-NeRF is available at https://github.com/luckhui0505/MVD-NeRF.
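The multi-ray-per-pixel idea behind defocus modeling can be illustrated with a standard thin-lens sampling sketch: ray origins are jittered on a lens aperture so that all rays converge at the focal plane, and the pixel color is the average over these rays. This is a minimal, hedged illustration of the general principle, assuming a thin-lens approximation; the function names (`defocus_rays`, `render_defocused_pixel`) and parameters are hypothetical and do not reproduce the paper's exact DMA formulation.

```python
import numpy as np

def defocus_rays(cam_origin, pixel_dir, focus_dist, aperture, n_rays=8, seed=None):
    """Sample n_rays thin-lens rays through one pixel that converge at the
    focal plane. Hypothetical sketch, not the paper's exact DMA."""
    rng = np.random.default_rng(seed)
    # the 3D point that stays sharp: where the central ray meets the focal plane
    focal_point = cam_origin + focus_dist * pixel_dir
    # uniform samples on a disk of radius `aperture` (the lens)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    r = aperture * np.sqrt(rng.uniform(0.0, 1.0, n_rays))
    # two axes spanning the lens plane, orthogonal to the view direction
    up = np.array([0.0, 1.0, 0.0])
    u = np.cross(pixel_dir, up); u /= np.linalg.norm(u)
    v = np.cross(pixel_dir, u)
    origins = cam_origin + (r * np.cos(theta))[:, None] * u \
                         + (r * np.sin(theta))[:, None] * v
    # every ray points at the same in-focus point -> points off the focal
    # plane are averaged over spatially spread rays, i.e. defocus blur
    dirs = focal_point - origins
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return origins, dirs

def render_defocused_pixel(render_ray, origins, dirs):
    """Average radiance over the lens samples to form the blurred pixel."""
    return np.mean([render_ray(o, d) for o, d in zip(origins, dirs)], axis=0)
```

With `aperture = 0` all origins collapse to the camera center and the model reduces to an ordinary pinhole ray, so sharpness is recovered as a special case.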