Nonnegative Tensor Representation With Cross-View Consensus for Incomplete Multi-View Clustering
Guo Zhong; Juanchun Wu; Xueming Yan; Xuanlong Ma; Shixun Lin
IEEE Signal Processing Letters, published 2024-09-20
DOI: 10.1109/LSP.2024.3466011
https://ieeexplore.ieee.org/document/10685117/
Citations: 0
Abstract
Tensors naturally capture the multi-dimensional structure of multi-view data, yielding richer and more meaningful representations and, in turn, more accurate clustering results for challenging incomplete multi-view clustering (IMVC) tasks. However, previous tensor learning-based IMVC (TLIMVC) methods often build a tensor representation by simply stacking view-specific representations. Consequently, the learned tensor representation lacks interpretability, since an entry cannot directly reveal the similarity relationship between the corresponding pair of samples. In addition, most of these methods focus only on exploring the high-order correlations among views, while the underlying consensus information is not fully exploited. To this end, we propose a novel TLIMVC method named Nonnegative Tensor Representation with Cross-view Consensus (NTRC$^{2}$). Specifically, a nonnegative constraint and view-specific consensus are jointly integrated into the framework of tensor-based self-representation learning, enabling the method to simultaneously and more fully exploit the consensus and complementary information of multi-view data. An Augmented Lagrangian Multiplier-based optimization algorithm is derived to optimize the objective function. Experiments on several challenging benchmark datasets verify the effectiveness and competitiveness of NTRC$^{2}$ against state-of-the-art methods.
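To make the abstract's central idea concrete, the following is a minimal, illustrative sketch of tensor-based self-representation with a nonnegativity constraint — it is NOT the authors' NTRC$^{2}$ algorithm (the paper's objective, consensus term, and ALM updates are not reproduced here), and all function names and parameters below are hypothetical. Each view is expressed through its own samples, X_v ≈ X_v Z_v, and the view-specific coefficient matrices are stacked into a third-order tensor:

```python
import numpy as np

def selfrep_nonneg(X, lam=1.0):
    """Ridge-regularized self-representation min_Z ||X - X Z||^2 + lam ||Z||^2,
    with entries then clipped to the nonnegative orthant (a crude stand-in
    for a proper nonnegative constraint inside the optimization)."""
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)  # closed-form ridge solution
    return np.maximum(Z, 0.0)

rng = np.random.default_rng(0)
# Two toy views observing the same 8 samples with different feature counts
views = [rng.standard_normal((20, 8)), rng.standard_normal((15, 8))]

# Stack view-specific coefficient matrices into a tensor of shape (views, n, n)
tensor = np.stack([selfrep_nonneg(X) for X in views])

# A simple cross-view consensus affinity (averaging; the paper instead learns
# consensus jointly within its objective)
affinity = tensor.mean(axis=0)
print(tensor.shape, bool((tensor >= 0).all()))
```

Because every entry of the stacked tensor is nonnegative, each slice can be read directly as a similarity-style coefficient matrix between sample pairs, which is the interpretability point the abstract raises about naively stacked, sign-unconstrained representations.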
Journal introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, and also at several workshops organized by the Signal Processing Society.