Multi-view clustering aims to uncover discriminative data structures by integrating complementary information from multiple feature views. However, existing approaches face several limitations: they struggle with high-dimensional, heterogeneous features, assign suboptimal weights to different views, suffer from severe feature redundancy, and fail to account for variations in view quality, all of which can substantially degrade clustering performance. To address these challenges, we propose DS-MVC, a novel deep multi-view clustering framework that combines an enhanced feature-fusion strategy with a multi-scale view contrastive learning scheme. First, we propose Dynamic View-Confidence Fusion, which operates at the feature level: we estimate the prediction confidence of each sample in each view and assign adaptive, sample-specific weights accordingly. This mechanism emphasizes high-quality views while suppressing the influence of noisy or low-quality views, enabling more accurate and fine-grained feature integration. Second, we propose Multi-Scale View Contrastive Learning, which leverages inter-view discrepancies to guide representation learning. By constructing hierarchical contrastive objectives from prediction discrepancies between samples, the model captures underlying structural relationships and contextual dependencies across views, yielding richer and more discriminative representations. Extensive experiments on multiple benchmark datasets demonstrate that DS-MVC achieves superior clustering accuracy and robustness, and ablation studies validate the effectiveness of each component and confirm the generalization capability of the proposed framework.
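The confidence-weighted fusion idea described above can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's implementation: it assumes each view produces per-sample cluster logits, takes the maximum softmax probability as that sample's confidence in that view, normalizes confidences across views with a softmax to obtain adaptive sample-specific weights, and fuses the per-view features as a weighted sum. All names (`confidence_weighted_fusion`, `temperature`) are illustrative assumptions.

```python
import numpy as np

def confidence_weighted_fusion(view_features, view_logits, temperature=1.0):
    """Fuse per-view features using sample-specific confidence weights.

    view_features: list of V arrays, each (N, D) -- per-view embeddings
    view_logits:   list of V arrays, each (N, K) -- per-view cluster logits
    Returns a fused (N, D) feature matrix.
    """
    # Confidence of each sample in each view: max softmax probability.
    confs = []
    for logits in view_logits:
        z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        confs.append(p.max(axis=1))                       # (N,)
    C = np.stack(confs, axis=1)                           # (N, V)

    # Adaptive, sample-specific weights: softmax over views, so a noisy
    # view is down-weighted per sample rather than globally.
    W = np.exp(C / temperature)
    W = W / W.sum(axis=1, keepdims=True)                  # rows sum to 1

    # Weighted sum of the per-view features.
    F = np.stack(view_features, axis=1)                   # (N, V, D)
    return (W[:, :, None] * F).sum(axis=1)                # (N, D)
```

In a deep model these weights would be computed online from the current cluster-head outputs each forward pass; the sketch uses fixed arrays only to make the weighting mechanics explicit.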