GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning

Kaizhe Fan, Quanjun Li
DOI: arxiv-2409.07725 (https://doi.org/arxiv-2409.07725)
Journal: arXiv - CS - Machine Learning
Published: 2024-09-12
Citations: 0

Abstract

Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations, enabling various downstream tasks such as node classification and community detection. However, most current graph neural network models face the challenge of requiring extensive labeled data, which limits their practical applicability in real-world scenarios where labeled data is scarce. To address this challenge, researchers have explored Graph Contrastive Learning (GCL), which leverages augmented graph data and contrastive learning techniques. While promising, existing GCL methods often struggle to effectively capture both local and global graph structures, and to balance the trade-off between node-level and graph-level representations. In this work, we propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning (GRE2-MDCL). Our model introduces a novel triple network architecture with a multi-head attention GNN as the core. GRE2-MDCL first augments the input graph globally and locally using SVD and LAGNN techniques. It then constructs a multidimensional contrastive loss, incorporating cross-network, cross-view, and neighbor contrast, to optimize the model. Extensive experiments on the benchmark datasets Cora, Citeseer, and PubMed demonstrate that GRE2-MDCL achieves state-of-the-art performance, with average accuracies of 82.5%, 72.5%, and 81.6%, respectively. Visualizations further show tighter intra-cluster aggregation and clearer inter-cluster boundaries, highlighting the effectiveness of our framework in improving upon baseline GCL models.
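Two of the building blocks named in the abstract can be illustrated with generic stand-ins. The sketch below is not the authors' implementation: it uses a low-rank SVD reconstruction of the adjacency matrix as a plain example of global structural augmentation, and a standard cross-view InfoNCE-style loss as one dimension of the multidimensional contrast (the paper's cross-network and neighbor contrast terms, the LAGNN local augmentation, and the multi-head attention GNN are not shown; all variable names here are hypothetical).

```python
import numpy as np

def svd_global_augment(adj, rank=16):
    """Rebuild the adjacency matrix from its top-`rank` singular
    components, emphasizing global structure over local noise."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    k = min(rank, len(s))
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

def info_nce(z1, z2, tau=0.5):
    """Cross-view InfoNCE loss: node i in view 1 is pulled toward
    node i in view 2 and pushed away from all other nodes."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                    # (N, N) cosine similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))         # positive pairs on the diagonal

# Toy 4-node ring graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_aug = svd_global_augment(A, rank=2)

# Node embeddings from two augmented views (random stand-ins here,
# where a GNN encoder would normally produce them)
rng = np.random.default_rng(0)
z_view1 = rng.normal(size=(4, 8))
z_view2 = z_view1 + 0.1 * rng.normal(size=(4, 8))
loss = info_nce(z_view1, z_view2)
```

In a full pipeline, the augmented adjacency would feed the encoder for one view while the original graph feeds another, and the contrastive loss would be summed across the cross-network, cross-view, and neighbor dimensions the abstract lists.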