Zero-Reference Low-Light Image Enhancement (LLIE) techniques mainly focus on grey-scale inhomogeneities, and few methods consider how to explicitly recover a dark scene so that both color and overall illumination are enhanced. In this paper, we introduce a novel Zero-Reference Color Self-Calibration framework for enhancing low-light images, termed Zero-CSC. It effectively emphasizes channel-wise representations that contain fine-grained color information, achieving natural results in a progressive manner. Furthermore, we propose a Light Up (LU) module with large-kernel convolutional blocks to improve overall illumination, implemented with a simple U-Net and further simplified into a lightweight structure. Experiments on representative datasets show that our model consistently achieves state-of-the-art performance in image signal-to-noise ratio, structural similarity, and color accuracy, setting new records on the challenging SICE dataset with improvements of 23.7% in image signal-to-noise ratio and 5.3% in structural similarity over the most advanced existing methods.
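As a rough illustration of the kind of component the Light Up module describes, below is a minimal PyTorch sketch of a large-kernel convolutional block inside a tiny U-Net-style network that predicts an illumination map. The kernel size, channel counts, network depth, and the Retinex-style brightening step are illustrative assumptions; this is not the actual Zero-CSC architecture.

```python
# Hypothetical sketch of a "Light Up"-style module built from large-kernel
# convolutions inside a tiny U-Net. All hyperparameters are assumptions made
# for illustration; the abstract does not specify the real design.
import torch
import torch.nn as nn


class LargeKernelBlock(nn.Module):
    """Depthwise large-kernel conv + pointwise conv with a residual (assumed)."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.act(self.pointwise(self.depthwise(x)))


class LightUpUNet(nn.Module):
    """Toy single-scale U-Net that predicts an illumination map."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.down = nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1)
        self.body = LargeKernelBlock(channels * 2)
        self.up = nn.ConvTranspose2d(channels * 2, channels, 2, stride=2)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        deep = self.body(self.down(feat))
        illum = torch.sigmoid(self.tail(self.up(deep) + feat))
        # Brighten by dividing by the predicted illumination (Retinex-style).
        return x / illum.clamp(min=1e-3)


if __name__ == "__main__":
    low_light = torch.rand(1, 3, 64, 64)
    print(LightUpUNet()(low_light).shape)  # torch.Size([1, 3, 64, 64])
```

The depthwise large-kernel convolution is used here because it enlarges the receptive field at low cost, which is one common way to combine "large-kernel convolutional blocks" with a lightweight structure.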
With the proliferation of short-form video traffic, video service providers face the challenge of balancing video quality against bandwidth consumption while processing massive volumes of videos. The most straightforward approach is to apply the same encoding parameters to all videos. However, this approach ignores differences in video content, and alternative encoding parameter configurations may improve global coding efficiency. Finding the optimal combination of encoding parameters for a batch of videos requires a large amount of redundant encoding, which introduces significant computational cost. To address this issue, we propose a low-complexity encoding parameter prediction model that adaptively adjusts the encoding parameter values according to video content. Experiments show that, when only the CRF encoding parameter is varied, our prediction model achieves 27.04%, 6.11%, and 15.92% bitrate savings in terms of PSNR, SSIM, and VMAF, respectively, while maintaining acceptable complexity compared with the approach that uses the same CRF value for all videos.
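To make the idea of content-adaptive parameter selection concrete, here is a hypothetical low-complexity sketch: cheap spatial and temporal complexity features are extracted from a clip and mapped to a per-video CRF. The features, weights, and CRF range below are placeholders; the abstract does not disclose the actual features or prediction model.

```python
# Illustrative sketch of a content-adaptive CRF predictor. The SI/TI-style
# complexity features and the linear mapping are assumptions used only to
# show the overall idea: extract cheap content descriptors, then predict a
# per-video CRF instead of applying one uniform value.
import numpy as np


def spatial_complexity(frame: np.ndarray) -> float:
    """Mean gradient magnitude of a grayscale frame (cheap texture proxy)."""
    gy, gx = np.gradient(frame.astype(np.float32))
    return float(np.mean(np.hypot(gx, gy)))


def temporal_complexity(frames: np.ndarray) -> float:
    """Mean absolute frame difference across a short clip (motion proxy)."""
    return float(np.mean(np.abs(np.diff(frames.astype(np.float32), axis=0))))


def predict_crf(frames: np.ndarray,
                base_crf: float = 28.0,
                w_spatial: float = 0.05,
                w_temporal: float = 0.10) -> int:
    """Map content complexity to a CRF value (hypothetical linear model)."""
    si = spatial_complexity(frames[0])
    ti = temporal_complexity(frames)
    # More complex content -> lower CRF (higher quality / bitrate).
    crf = base_crf - w_spatial * si - w_temporal * ti
    return int(np.clip(round(crf), 18, 35))


if __name__ == "__main__":
    clip = np.random.randint(0, 256, size=(8, 270, 480), dtype=np.uint8)
    print("predicted CRF:", predict_crf(clip))
```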
Video-based Point Cloud Compression (V-PCC) enables point cloud streaming over the internet by converting dynamic 3D point clouds into 2D geometry and attribute videos, which are then compressed with 2D video codecs such as H.266/VVC. However, the complex encoding process of H.266/VVC, such as the quadtree with nested multi-type tree (QTMT) partition, greatly hinders the practical application of V-PCC. To address this issue, we propose a fast CU partition method dedicated to V-PCC to accelerate the coding process. Specifically, we classify coding units (CUs) of the projected images into three categories based on the occupancy map of the point cloud: unoccupied, partially occupied, and fully occupied. We then apply either statistics-based rules or machine-learning models to manage the partitioning of each category. For unoccupied CUs, we terminate the partition directly; for partially occupied CUs with explicit directions, we selectively skip certain partition candidates; for the remaining CUs (partially occupied CUs with complex directions and fully occupied CUs), we train an edge-driven LightGBM model to predict the probability of each partition candidate automatically. Only partitions with high probabilities are retained for further Rate-Distortion (R-D) decisions. Comprehensive experiments demonstrate the superior performance of the proposed method: under the V-PCC common test conditions, it reduces encoding time by 52% and 44% for geometry and attribute coding, respectively, while incurring only 0.68% (0.66%) BD-Rate loss in D1 (D2) measurements and 0.79% (luma) BD-Rate loss in attribute, significantly surpassing state-of-the-art works.
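The occupancy-driven routing can be sketched as follows. This is not the authors' V-PCC/VTM integration: the occupancy-edge features, the probability threshold, and the toy LightGBM training data are placeholders, and the rule-based candidate skipping for partially occupied CUs with explicit directions is folded into the model path for brevity.

```python
# Simplified, hypothetical sketch of occupancy-based CU routing with a
# LightGBM partition-probability predictor. Features, threshold, and the
# synthetic training data are placeholders for illustration only.
import numpy as np
import lightgbm as lgb

PARTITION_MODES = ["NO_SPLIT", "QT", "BT_H", "BT_V", "TT_H", "TT_V"]


def classify_cu(occupancy_block: np.ndarray) -> str:
    """Label a CU as unoccupied / partially / fully occupied from the map."""
    ratio = occupancy_block.mean()
    if ratio == 0.0:
        return "unoccupied"
    if ratio == 1.0:
        return "fully"
    return "partially"


def candidate_partitions(occupancy_block: np.ndarray,
                         edge_model: lgb.LGBMClassifier,
                         keep_prob: float = 0.3) -> list:
    """Return the partition candidates kept for the R-D search (sketch)."""
    if classify_cu(occupancy_block) == "unoccupied":
        return ["NO_SPLIT"]                       # terminate partitioning early
    # Hypothetical edge-driven features: occupancy ratio plus horizontal and
    # vertical occupancy-edge strength inside the CU.
    feats = np.array([[occupancy_block.mean(),
                       np.abs(np.diff(occupancy_block, axis=0)).mean(),
                       np.abs(np.diff(occupancy_block, axis=1)).mean()]])
    probs = edge_model.predict_proba(feats)[0]
    kept = [m for m, p in zip(PARTITION_MODES, probs) if p >= keep_prob]
    return kept or ["NO_SPLIT"]


if __name__ == "__main__":
    # Fit a toy model on random data purely so the sketch runs end to end.
    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = rng.integers(0, len(PARTITION_MODES), 200)
    model = lgb.LGBMClassifier(n_estimators=20).fit(X, y)
    cu = (rng.random((32, 32)) > 0.5).astype(np.float32)
    print(candidate_partitions(cu, model))
```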
Quality assessment of 4K super-resolution (SR) videos is conducive to the optimization of video SR algorithms. To improve the subjective-objective consistency of SR quality assessment, a 4K video database and a blind metric are proposed in this paper. The database, SR4KVQA, contains 30 pristine 4K videos, from which 600 distorted 4K SR videos with mean opinion score (MOS) labels are generated by three classic interpolation methods, six SR algorithms based on deep neural networks (DNNs), and two SR algorithms based on generative adversarial networks (GANs). The benchmark experiment on the proposed database indicates that video quality assessment (VQA) of 4K SR videos is challenging for existing metrics. Among them, the Video-Swin-Transformer backbone demonstrates great potential for the VQA task. Accordingly, a blind VQA metric based on the Video-Swin-Transformer backbone is established, in which a normalized loss function and an optimized spatio-temporal sampling strategy are applied. The experimental results show that the Pearson linear correlation coefficient (PLCC) and Spearman rank-order correlation coefficient (SROCC) of the proposed metric reach 0.8011 and 0.8275, respectively, on the SR4KVQA database, which outperforms or competes with state-of-the-art VQA metrics. The database and code proposed in this paper are available in the GitHub repository at https://github.com/AlexReadyNico/SR4KVQA.
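Two of the ingredients mentioned above, a score-normalized loss and spatio-temporal sampling, can be sketched as follows. The exact loss formulation, clip and crop sizes, and the Video-Swin-Transformer configuration used by the authors are not given in the abstract, so the PLCC-style loss and the random sampling parameters below are assumptions.

```python
# Hedged sketch of a normalized (PLCC-style) regression loss and a simple
# spatio-temporal sampling step for a blind VQA model. Parameters are
# illustrative assumptions, not the SR4KVQA training recipe.
import torch


def normalized_plcc_loss(pred: torch.Tensor, mos: torch.Tensor) -> torch.Tensor:
    """1 - Pearson correlation between predicted scores and MOS labels."""
    pred = pred - pred.mean()
    mos = mos - mos.mean()
    plcc = (pred * mos).sum() / (pred.norm() * mos.norm() + 1e-8)
    return 1.0 - plcc


def sample_clip(video: torch.Tensor, frames: int = 16, crop: int = 224) -> torch.Tensor:
    """Randomly sample a short clip and spatial crop from a (T, C, H, W) video."""
    t0 = torch.randint(0, video.shape[0] - frames + 1, (1,)).item()
    y0 = torch.randint(0, video.shape[2] - crop + 1, (1,)).item()
    x0 = torch.randint(0, video.shape[3] - crop + 1, (1,)).item()
    return video[t0:t0 + frames, :, y0:y0 + crop, x0:x0 + crop]


if __name__ == "__main__":
    fake_video = torch.rand(32, 3, 540, 960)   # downsampled stand-in for a 4K clip
    clip = sample_clip(fake_video)             # (16, 3, 224, 224) backbone input
    pred_scores, mos_labels = torch.rand(8), torch.rand(8)
    print(clip.shape, normalized_plcc_loss(pred_scores, mos_labels).item())
```

A correlation-based loss is one common way to train on normalized MOS values, since it is insensitive to the absolute scale of the predicted scores; the sampled clips would then be fed to the Video-Swin-Transformer backbone for score regression.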