GraKerformer: A Transformer With Graph Kernel for Unsupervised Graph Representation Learning

IF 9.4 · JCR Region 1 (Computer Science) · Q1 (Automation & Control Systems) · IEEE Transactions on Cybernetics · Pub Date: 2024-10-08 · DOI: 10.1109/TCYB.2024.3465213
Lixiang Xu;Haifeng Liu;Xin Yuan;Enhong Chen;Yuanyan Tang
Journal: IEEE Transactions on Cybernetics, vol. 54, no. 12, pp. 7320-7332. Published: 2024-10-08 (Journal Article). URL: https://ieeexplore.ieee.org/document/10707645/
Citation count: 0

Abstract

While highly influential in deep learning, especially in natural language processing, the Transformer model has not exhibited competitive performance in unsupervised graph representation learning (UGRL). Conventional approaches, which focus on local substructures on the graph, offer simplicity but often fall short in encapsulating comprehensive structural information of the graph. This deficiency leads to suboptimal generalization performance. To address this, we proposed the GraKerformer model, a variant of the standard Transformer architecture, to mitigate the shortfall in structural information representation and enhance the performance in UGRL. By leveraging the shortest-path graph kernel (SPGK) to weight attention scores and combining graph neural networks, the GraKerformer effectively encodes the nuanced structural information of graphs. We conducted evaluations on the benchmark datasets for graph classification to validate the superior performance of our approach.
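The abstract only sketches the mechanism — shortest-path structure used to weight Transformer attention scores — without giving the exact formulation. The snippet below is a minimal illustrative sketch of that general idea, not the paper's actual method: it biases self-attention by all-pairs shortest-path distance, so that structurally close nodes attend to each other more strongly. The exponential-decay form, the `gamma` parameter, and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def floyd_warshall(adj):
    """All-pairs shortest-path distances for an unweighted graph (adjacency matrix)."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        # Relax every pair (i, j) through intermediate node k.
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

def softmax(x):
    x = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def sp_biased_attention(Q, K, V, adj, gamma=1.0):
    """Self-attention penalized by shortest-path distance (illustrative only).

    Subtracting gamma * dist from the logits multiplies each attention
    weight by exp(-gamma * dist); unreachable pairs (dist = inf) get
    exactly zero attention.
    """
    dist = floyd_warshall(adj)
    scores = (Q @ K.T) / np.sqrt(Q.shape[1])
    attn = softmax(scores - gamma * dist)
    return attn @ V
```

On a 3-node path graph `0 - 1 - 2`, nodes 0 and 2 are two hops apart, so their mutual attention is damped by a factor of `exp(-2 * gamma)` relative to adjacent nodes — a toy version of injecting shortest-path structure that a purely sequence-oriented Transformer would otherwise ignore.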
Source journal: IEEE Transactions on Cybernetics (Computer Science — Artificial Intelligence; Computer Science — Cybernetics)
CiteScore: 25.40
Self-citation rate: 11.00%
Articles published per year: 1869
Journal scope: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines, or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.