{"title":"图形结构感知对比多视图聚类","authors":"Rui Chen;Yongqiang Tang;Xiangrui Cai;Xiaojie Yuan;Wenlong Feng;Wensheng Zhang","doi":"10.1109/TBDATA.2023.3334674","DOIUrl":null,"url":null,"abstract":"Multi-view clustering has become a research hotspot in recent decades because of its effectiveness in heterogeneous data fusion. Although a large number of related studies have been developed one after another, most of them usually only concern with the characteristics of the data themselves and overlook the inherent connection among samples, hindering them from exploring structural knowledge of graph space. Moreover, many current works tend to highlight the compactness of one cluster without taking the differences between clusters into account. To track these two drawbacks, in this article, we propose a graph structure aware contrastive multi-view clustering (namely, GCMC) approach. Specifically, we incorporate the well-designed graph autoencoder with conventional multi-layer perception autoencoder to extract the structural and high-level representation of multi-view data, so that the underlying correlation of samples can be effectively squeezed for model learning. Then the contrastive learning paradigm is performed on multiple pseudo-label distributions to ensure that the positive pairs of pseudo-label representations share the complementarity across views while the divergence between negative pairs is sufficiently large. This makes each semantic cluster more discriminative, i.e., jointly satisfying intra-cluster compactness and inter-cluster exclusiveness. Through comprehensive experiments on eight widely-known datasets, we prove that the proposed approach can perform better than the state-of-the-art opponents.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"10 3","pages":"260-274"},"PeriodicalIF":7.5000,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Graph Structure Aware Contrastive Multi-View Clustering\",\"authors\":\"Rui Chen;Yongqiang Tang;Xiangrui Cai;Xiaojie Yuan;Wenlong Feng;Wensheng Zhang\",\"doi\":\"10.1109/TBDATA.2023.3334674\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-view clustering has become a research hotspot in recent decades because of its effectiveness in heterogeneous data fusion. Although a large number of related studies have been developed one after another, most of them usually only concern with the characteristics of the data themselves and overlook the inherent connection among samples, hindering them from exploring structural knowledge of graph space. Moreover, many current works tend to highlight the compactness of one cluster without taking the differences between clusters into account. To track these two drawbacks, in this article, we propose a graph structure aware contrastive multi-view clustering (namely, GCMC) approach. Specifically, we incorporate the well-designed graph autoencoder with conventional multi-layer perception autoencoder to extract the structural and high-level representation of multi-view data, so that the underlying correlation of samples can be effectively squeezed for model learning. Then the contrastive learning paradigm is performed on multiple pseudo-label distributions to ensure that the positive pairs of pseudo-label representations share the complementarity across views while the divergence between negative pairs is sufficiently large. 
Multi-view clustering has become a research hotspot in recent decades because of its effectiveness in fusing heterogeneous data. Although a large number of related studies have been developed, most of them concern only the characteristics of the data themselves and overlook the inherent connections among samples, which prevents them from exploiting the structural knowledge of the graph space. Moreover, many existing works tend to emphasize the compactness within a cluster without taking the differences between clusters into account. To tackle these two drawbacks, this article proposes a graph structure aware contrastive multi-view clustering (GCMC) approach. Specifically, we combine a well-designed graph autoencoder with a conventional multi-layer perceptron autoencoder to extract both the structural and the high-level representations of multi-view data, so that the underlying correlations among samples can be effectively exploited for model learning. A contrastive learning paradigm is then applied to multiple pseudo-label distributions to ensure that positive pairs of pseudo-label representations share complementary information across views while the divergence between negative pairs is sufficiently large. This makes each semantic cluster more discriminative, i.e., it jointly enforces intra-cluster compactness and inter-cluster exclusiveness. Through comprehensive experiments on eight widely used datasets, we show that the proposed approach outperforms state-of-the-art competitors.
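To make the cluster-level contrastive idea in the abstract concrete, below is a minimal sketch, not the authors' exact formulation, of a contrastive loss over pseudo-label distributions from two views: column k of each soft-assignment matrix acts as the representation of cluster k, the same cluster across views forms a positive pair, and all other clusters are negatives. Function names, the temperature value, and the toy data are illustrative assumptions.

```python
# Minimal sketch (assumed formulation, not GCMC's exact loss) of a
# cluster-level contrastive objective on pseudo-label distributions.
import torch
import torch.nn.functional as F


def cluster_contrastive_loss(p1, p2, temperature=0.5):
    """p1, p2: (n_samples, n_clusters) soft pseudo-label matrices of two views."""
    k = p1.shape[1]
    # Represent each cluster by its L2-normalised assignment column.
    c1 = F.normalize(p1.t(), dim=1)          # (k, n_samples)
    c2 = F.normalize(p2.t(), dim=1)          # (k, n_samples)
    reps = torch.cat([c1, c2], dim=0)        # (2k, n_samples)
    sim = reps @ reps.t() / temperature      # pairwise cosine similarities
    # Mask self-similarities so a cluster is never its own negative.
    mask = torch.eye(2 * k, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # The positive for cluster i in view 1 is cluster i in view 2, and vice versa.
    targets = torch.cat([torch.arange(k, 2 * k), torch.arange(0, k)]).to(sim.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy pseudo-label distributions: 128 samples, 10 clusters per view.
    p1 = torch.softmax(torch.randn(128, 10), dim=1)
    p2 = torch.softmax(torch.randn(128, 10), dim=1)
    print(cluster_contrastive_loss(p1, p2).item())
```

Pulling matched assignment columns together while pushing different clusters apart is one common way to encourage the intra-cluster compactness and inter-cluster exclusiveness described above; the paper's actual loss and network design may differ.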
About the Journal:
The IEEE Transactions on Big Data publishes peer-reviewed articles focusing on big data. These articles present innovative research ideas and application results across disciplines, including novel theories, algorithms, and applications. Research areas cover a wide range, such as big data analytics, visualization, curation, management, semantics, infrastructure, standards, performance analysis, intelligence extraction, scientific discovery, security, privacy, and legal issues specific to big data. The journal also prioritizes applications of big data in fields generating massive datasets.