Advances in image dehazing research have increased the demand for effective dehazed image quality assessment (DQA) methods. However, existing DQA approaches are limited by scarce labeled data, which leads to insufficient representation of quality-related information. Most current methods focus on distortion artifacts introduced by dehazing algorithms or rely on a single quality factor, limiting their performance and generalizability. In this work, we propose a novel no-reference DQA model that leverages self-supervised reconstruction and pseudo-label learning to extract three complementary perceptual features: image Content, Distortion, and Fog Density (CDFD-DQA). The framework comprises four key components: a Feature Extraction Module (FEM), a Perceptual Feature Representation Module (PFRM), a Feature Self-Interaction Module (FSIM), and a Dual-branch Quality Predictor (DQP). The FEM uses pre-trained content-aware and distortion-aware encoders, along with a fog density predictor, to capture quality-discriminative features related to content preservation, distortion artifacts, and fog density. These features are refined by the PFRM to enhance their expressive capacity. To capture dependencies among the features, the FSIM incorporates Content-Distortion-Fog Density Feature Self-Interaction (CDFD-FSI), adaptively integrating interrelated and independent representations. Finally, the DQP maps the fused features to perceptual quality scores. Extensive experiments on five publicly available DQA datasets demonstrate that CDFD-DQA aligns well with human subjective perception and outperforms several existing state-of-the-art methods.
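To make the four-stage pipeline concrete, the following is a minimal sketch of how the FEM, PFRM, FSIM, and DQP stages could be composed. Only the module names and their roles come from the abstract; all layer choices, feature dimensions, the attention-based realization of CDFD-FSI, and the dual-branch weighting in DQP are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the CDFD-DQA pipeline. Module names follow the
# abstract; every architectural detail below is an assumption.
import torch
import torch.nn as nn

class FEM(nn.Module):
    """Feature Extraction Module: three parallel encoders (stand-ins here for
    the pre-trained content-aware encoder, distortion-aware encoder, and fog
    density predictor)."""
    def __init__(self, dim=128):
        super().__init__()
        def encoder():
            return nn.Sequential(nn.Conv2d(3, dim, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.content, self.distortion, self.fog = encoder(), encoder(), encoder()
    def forward(self, x):
        return [self.content(x), self.distortion(x), self.fog(x)]

class PFRM(nn.Module):
    """Perceptual Feature Representation Module: refines each feature stream."""
    def __init__(self, dim=128):
        super().__init__()
        self.refine = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(3)])
    def forward(self, feats):
        return [net(f) for net, f in zip(self.refine, feats)]

class FSIM(nn.Module):
    """Feature Self-Interaction Module: models dependencies among the three
    streams; multi-head self-attention over the stacked features is one
    assumed realization of CDFD-FSI."""
    def __init__(self, dim=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
    def forward(self, feats):
        tokens = torch.stack(feats, dim=1)              # (B, 3, dim)
        interacted, _ = self.attn(tokens, tokens, tokens)
        # Residual sum combines independent and interrelated representations.
        return (tokens + interacted).flatten(1)         # (B, 3*dim)

class DQP(nn.Module):
    """Dual-branch Quality Predictor: a score branch gated by a weight branch
    (the dual-branch design here is an assumption)."""
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, 1))
        self.weight = nn.Sequential(nn.Linear(3 * dim, 1), nn.Sigmoid())
    def forward(self, fused):
        return self.score(fused) * self.weight(fused)

class CDFD_DQA(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.fem, self.pfrm = FEM(dim), PFRM(dim)
        self.fsim, self.dqp = FSIM(dim), DQP(dim)
    def forward(self, x):
        return self.dqp(self.fsim(self.pfrm(self.fem(x))))

if __name__ == "__main__":
    model = CDFD_DQA()
    scores = model(torch.randn(2, 3, 224, 224))
    print(scores.shape)  # torch.Size([2, 1]) -- one quality score per image
```

In this sketch the three encoders would be frozen after their self-supervised reconstruction and pseudo-label pre-training, and only the PFRM, FSIM, and DQP heads would be fit on the labeled DQA data; that split is consistent with the abstract's motivation of compensating for scarce quality labels.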