This paper proposes a unified 3D Gaussian splatting framework consisting of three key components for reconstructing scenes degraded by motion and defocus blur. First, a dual-blur perception module generates pixel-wise masks and predicts the type of blur (motion or defocus), guiding structural feature extraction. Second, a blur-aware Gaussian splatting module integrates blur-aware features into the splatting process to accurately model both the global and local scene structure. Third, an Unoptimized Gaussian Ratio (UGR)-opacity joint optimization strategy refines under-optimized regions, improving reconstruction accuracy under complex blur conditions. Experiments on a newly constructed motion and defocus blur dataset demonstrate the effectiveness of the proposed method for novel view synthesis. Compared with state-of-the-art methods, our framework achieves gains of 0.28 dB in PSNR and 2.46% in SSIM, and a 39.88% reduction in LPIPS (lower is better). For deblurring tasks, it achieves gains of 0.36 dB in PSNR and 3.24% in SSIM, and a 28.96% reduction in LPIPS. These results highlight the robustness and effectiveness of the proposed approach. Additional visual results and video renderings are available on our project webpage: https://sunbeam-217.github.io/Dual-blur-reconstruction/.
