Haze significantly reduces the visual quality of images, particularly in dense atmospheric conditions, resulting in a substantial loss of perceptible structural and semantic information. This degradation negatively affects the performance of vision-based systems in critical applications such as autonomous navigation and intelligent surveillance. Consequently, single image dehazing has been recognized as a challenging inverse problem, aiming to restore clear images from hazy observations. Although significant progress has been made with existing dehazing approaches, the intrinsic mixing of haze-related features with unrelated image content often leads to distortions in color and detail preservation, limiting restoration accuracy. In recent years, Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated excellent performance in image generation and restoration tasks. However, the effectiveness of these methods in single image dehazing remains constrained by both irrelevant image content and temporal redundancy during sampling. To address these limitations, we propose a diffusion model-based dehazing method that effectively recovers image content by integrating both local and global priors through differential convolution. Furthermore, the generative capability of the DDPM is exploited to enhance image texture and fine details. To reduce temporal redundancy during the diffusion process, a noise addition strategy based on the Fibonacci sequence is introduced, which significantly shortens the sampling time and improves overall computational efficiency. Experimental validation shows that the proposed method requires only 1/5 to 1/6 of the time needed by the linear noise addition method. Additionally, the overall network achieves excellent performance on both synthetic and real dehazing datasets.
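The Fibonacci-based noise addition strategy can be illustrated with a minimal sketch. The exact schedule used by the method is not specified in the abstract, so the mapping below, which visits only timesteps offset from the final step by Fibonacci numbers rather than all timesteps linearly, is an assumption for illustration only.

```python
def fibonacci_timesteps(total_steps):
    """Select a sparse subset of diffusion timesteps spaced by the
    Fibonacci sequence, instead of visiting all timesteps linearly.

    NOTE: illustrative assumption only; the paper's actual schedule
    may differ (e.g., Fibonacci step sizes rather than offsets).
    """
    # Build Fibonacci numbers smaller than the total step count.
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] < total_steps:
        fibs.append(fibs[-1] + fibs[-2])
    # Visit timesteps from noisy (high t) to clean (low t),
    # offset from the final step by each Fibonacci number.
    return sorted({total_steps - f for f in fibs}, reverse=True)

steps = fibonacci_timesteps(1000)
print(f"{len(steps)} sampling steps instead of 1000: {steps}")
```

Because Fibonacci numbers grow exponentially, the number of visited timesteps grows only logarithmically with the total step count, which is the intuition behind the reduced sampling time.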