Multiview Representation Learning With One-to-Many Dynamic Relationships.

IF 10.2 | CAS Zone 1, Computer Science | JCR Q1, Computer Science, Artificial Intelligence | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-05 | DOI: 10.1109/TNNLS.2024.3482408
Dan Li, Haibao Wang, Shihui Ying
{"title":"Multiview Representation Learning With One-to-Many Dynamic Relationships.","authors":"Dan Li, Haibao Wang, Shihui Ying","doi":"10.1109/TNNLS.2024.3482408","DOIUrl":null,"url":null,"abstract":"<p><p>Integrating information from multiple views to obtain potential representations with stronger expressive ability has received significant attention in practical applications. Most existing algorithms usually focus on learning either the consistent or complementary representation of views and, subsequently, integrate one-to-one corresponding sample representations between views. Although these approaches yield effective results, they do not fully exploit the information available from multiple views, limiting the potential for further performance improvement. In this article, we propose an unsupervised multiview representation learning method based on sample relationships, which enables the one-to-many fusion of intraview and interview information. Due to the heterogeneity of views, we need mainly face the two following challenges: 1) the discrepancy in the dimensions of data across different views and 2) the characterization and utilization of sample relationships across these views. To address these two issues, we adopt two modules: the dimension consistency relationship enhancement module and the multiview graph learning module. Thereinto, the relationship enhancement module addresses the discrepancy in data dimensions across different views and dynamically selects data dimensions for each sample that bolsters intraview relationships. The multiview graph learning module devises a novel multiview adjacency matrix to capture both intraview and interview sample relationships. To achieve one-to-many fusion and obtain multiview representations, we employ the graph autoencoder structure. Furthermore, we extend the proposed architecture to the supervised case. We conduct extensive experiments on various real-world multiview datasets, focusing on clustering and multilabel classification tasks, to evaluate the effectiveness of our method. The results demonstrate that our approach significantly improves performance compared to existing methods, highlighting the potential of leveraging sample relationships for multiview representation learning. Our code is released at https://github.com/ lilidan-orm/one-to-many-multiview on GitHub.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":null,"pages":null},"PeriodicalIF":10.2000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2024.3482408","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Integrating information from multiple views to obtain latent representations with stronger expressive ability has received significant attention in practical applications. Most existing algorithms focus on learning either the consistent or the complementary representation of views and subsequently integrate one-to-one corresponding sample representations between views. Although these approaches yield effective results, they do not fully exploit the information available from multiple views, limiting the potential for further performance improvement. In this article, we propose an unsupervised multiview representation learning method based on sample relationships, which enables one-to-many fusion of intra-view and inter-view information. Owing to the heterogeneity of views, we mainly face the following two challenges: 1) the discrepancy in data dimensions across different views and 2) the characterization and utilization of sample relationships across these views. To address these two issues, we adopt two modules: the dimension-consistency relationship enhancement module and the multiview graph learning module. The relationship enhancement module addresses the discrepancy in data dimensions across different views and dynamically selects, for each sample, the data dimensions that bolster intra-view relationships. The multiview graph learning module devises a novel multiview adjacency matrix to capture both intra-view and inter-view sample relationships. To achieve one-to-many fusion and obtain multiview representations, we employ a graph autoencoder structure. Furthermore, we extend the proposed architecture to the supervised case. We conduct extensive experiments on various real-world multiview datasets, focusing on clustering and multilabel classification tasks, to evaluate the effectiveness of our method. The results demonstrate that our approach significantly improves performance compared to existing methods, highlighting the potential of leveraging sample relationships for multiview representation learning. Our code is released at https://github.com/lilidan-orm/one-to-many-multiview on GitHub.
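To make the one-to-many idea concrete, the sketch below shows one way a multiview adjacency matrix combining intra-view and inter-view sample relationships could be assembled: intra-view blocks come from kNN graphs within each view, and off-diagonal blocks link every sample to several neighbors in the other views rather than only to its one-to-one counterpart. This is a minimal illustration, not the authors' released implementation; the function names, the neighborhood size k, the inter-view weight alpha, and the assumption that all views have already been projected to a common dimension are assumptions for the example.

```python
# Minimal sketch (assumed names and parameters, not the paper's code) of a
# multiview adjacency matrix with intra-view and one-to-many inter-view links.
import numpy as np

def knn_graph(X, k):
    """Symmetric binary kNN adjacency within a single view."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # no self-loops
    A = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]     # k nearest neighbors per sample
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)              # symmetrize

def cross_view_knn(X1, X2, k):
    """Directed kNN links from samples in one view to samples in another.
    Assumes both views were first projected to a common dimension."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    A = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(X1.shape[0]), k)
    A[rows, idx.ravel()] = 1.0
    return A

def multiview_adjacency(views, k=5, alpha=0.5):
    """Block matrix: intra-view kNN graphs on the diagonal, one-to-many
    inter-view kNN links (scaled by alpha) in the off-diagonal blocks."""
    V, n = len(views), views[0].shape[0]
    A = np.zeros((V * n, V * n))
    for i, Xi in enumerate(views):
        A[i*n:(i+1)*n, i*n:(i+1)*n] = knn_graph(Xi, k)
        for j, Xj in enumerate(views):
            if i != j:
                A[i*n:(i+1)*n, j*n:(j+1)*n] = alpha * cross_view_knn(Xi, Xj, k)
    return A

# Toy example: two views of 100 samples, each already reduced to 16 dimensions.
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 16)) for _ in range(2)]
A = multiview_adjacency(views, k=5, alpha=0.5)
print(A.shape)  # (200, 200): one row/column per (sample, view) pair
```

In the architecture described by the abstract, a matrix of this kind would then feed a graph autoencoder that aggregates each sample's one-to-many neighbors within and across views to produce the fused representation; the actual adjacency construction and training procedure in the authors' repository may differ from this illustration.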

Source journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Articles published: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.
Latest articles in this journal
Analysis of Discrete-Time Switched Linear Systems Under Logical Dynamic Switching.
Multiview Representation Learning With One-to-Many Dynamic Relationships.
Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning.
SRCD: Semantic Reasoning With Compound Domains for Single-Domain Generalized Object Detection.
Synergistic Attention-Guided Cascaded Graph Diffusion Model for Complementarity Determining Region Synthesis.