Explicit View-Labels Matter: A Multifacet Complementarity Study of Multi-View Clustering

Chuanxing Geng;Aiyang Han;Songcan Chen
DOI: 10.1109/TPAMI.2024.3521478
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 4, pp. 2520-2532
Published: 2024-12-26
URL: https://ieeexplore.ieee.org/document/10816579/
Citations: 0

Abstract

Consistency and complementarity are two key ingredients for boosting multi-view clustering (MVC). Recently, with the introduction of contrastive learning, consistency learning across views has been further enhanced in MVC, leading to promising performance. By contrast, complementarity has not received sufficient attention, except in the feature facet, where a Hilbert-Schmidt Independence Criterion (HSIC) term or an independent encoder-decoder network is usually adopted to capture view-specific information. This motivates us to reconsider the complementarity learning of views comprehensively, from multiple facets including the feature, view-label, and contrast facets, while maintaining view consistency. We empirically find that all of these facets contribute to complementarity learning, especially the view-label facet, which is usually neglected by existing methods. Based on this finding, we develop a simple yet effective Multifacet Complementarity learning framework for Multi-View Clustering (MCMVC), which fuses multifacet complementarity information and, in particular, explicitly embeds view-label information. To the best of our knowledge, this is the first work to use view-labels explicitly to guide the complementarity learning of views. Compared with SOTA baselines, MCMVC achieves remarkable improvements, e.g., average margins of over 5.00% and 7.00% in the complete and incomplete MVC settings, respectively, on Caltech101-20 across three evaluation metrics.
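The feature-facet complementarity term mentioned in the abstract is typically an HSIC penalty that encourages view-specific representations to be statistically independent. As an illustration only (not the authors' released code), the following is a minimal NumPy sketch of the biased HSIC estimator with RBF kernels; the function names and the fixed bandwidth are assumptions made for the example.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gram matrix of an RBF kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # Biased HSIC estimator: tr(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T centers the Gram matrices.
    # Values near 0 indicate (approximate) independence of X and Y.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = rbf_gram(X, sigma)
    L = rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a feature-facet setup, `hsic` would be evaluated between the view-specific embeddings of two views and added to the loss, so that gradient descent pushes those embeddings toward independence and each view retains its own information.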