Low-light image enhancement (LLIE) is a critical technology for ensuring the robustness of downstream vision tasks such as autonomous driving and intelligent surveillance. To address the limitations of existing enhancement methods, including heavy reliance on paired supervised data, high computational cost, and susceptibility to color distortion, we propose CSME-Net, a lightweight, unsupervised, and color-adaptive enhancement network. Operating in the YUV color space, the network adopts a "luminance-prior, chrominance-controlled" channel-separation strategy, performing structure-aware adaptive enhancement exclusively on the luminance (Y) channel while applying constrained fine-tuning to the chrominance (UV) channels, which reduces redundant computation while maintaining color consistency. Furthermore, we introduce the Y-attention module, which leverages the enhanced luminance (Y) as an online guidance signal to dynamically regulate chrominance enhancement, achieving real-time interaction and balance between luminance and chrominance. By integrating the BilateralFReLU activation and a Gamma-weighted loss, CSME-Net significantly improves feature extraction while maintaining an extremely lightweight architecture with only 0.00495 M (about 4.95 K) parameters. Experimental results demonstrate that, under an unsupervised framework, CSME-Net achieves stable convergence with only 100 training samples. The method not only effectively suppresses color distortion and information loss but also provides a practical pathway toward high-quality, high-frame-rate, real-time image enhancement on low-computation platforms.
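The channel-separation strategy described above can be illustrated with a minimal sketch. This is not the authors' network: it uses a fixed gamma curve on Y and a constant chroma gain in place of CSME-Net's learned structure-aware enhancement and Y-attention, purely to show the "enhance Y, constrain UV" pipeline; the function names, the gamma value, and the `chroma_gain` parameter are assumptions for illustration.

```python
import numpy as np

# BT.601 RGB -> YUV transform (a common convention; the paper may use another).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def enhance_yuv(rgb, gamma=0.5, chroma_gain=1.05):
    """Sketch of luminance-prior, chrominance-controlled enhancement.

    rgb: float array in [0, 1], shape (H, W, 3).
    The heavy enhancement happens on Y only; U/V get a small,
    constrained rescaling to stay consistent with the brighter Y.
    """
    yuv = rgb @ RGB2YUV.T
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    y_enh = np.clip(y, 1e-6, 1.0) ** gamma          # brighten luminance only
    u_enh, v_enh = u * chroma_gain, v * chroma_gain  # constrained chroma tweak
    out = np.stack([y_enh, u_enh, v_enh], axis=-1) @ YUV2RGB.T
    return np.clip(out, 0.0, 1.0)
```

In CSME-Net the fixed gamma curve would be replaced by the learned luminance branch, and the constant `chroma_gain` by the Y-attention module's guidance signal; the sketch only captures the division of labor between the channels.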