Exploring Attention and Self-Supervised Learning Mechanism for Graph Similarity Learning
Guangqi Wen; Xin Gao; Wenhui Tan; Peng Cao; Jinzhu Yang; Weiping Li; Osmar R. Zaiane
IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 11106-11120
DOI: 10.1109/TNNLS.2024.3513546 | Published: 2024-12-17
Citations: 0
Abstract
Graph similarity estimation is a challenging task due to complex graph structures. Though important and well studied, three critical aspects have yet to be fully handled in a unified framework: 1) how to learn richer cross-graph interactions from a pairwise node perspective; 2) how to map the similarity matrix into a similarity score by exploiting the inherent structure of the similarity matrix; and 3) how to establish a self-supervised learning mechanism for graph similarity learning. To address these issues, we explore multiple attention and self-supervised mechanisms for graph similarity learning in this work. More specifically, we propose a unified self-supervised nodewise attention-guided graph similarity learning framework (SNA-GSL) involving: 1) a correlation-guided contrastive learning module for capturing valuable node embeddings and 2) a graph similarity learning module for predicting similarity scores with multiple proposed attention mechanisms. Extensive experimental results on the graph-graph regression and graph classification tasks demonstrate that the proposed SNA-GSL performs favorably against state-of-the-art methods. Moreover, the strong performance of our model on the graph classification task indicates its generalization capability. The code is available at https://github.com/IntelliDAL/Graph/SNA-GSL.
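The pipeline sketched in the abstract — pairwise cross-graph node interactions forming a similarity matrix, which is then collapsed into a single similarity score — can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the scaled dot-product attention and the mean-pooling readout are simple placeholders standing in for SNA-GSL's learned attention mechanisms and its structure-aware mapping from matrix to score.

```python
# Illustrative sketch only (not the SNA-GSL code): given node embeddings for
# two graphs, build a pairwise cross-graph similarity matrix via scaled
# dot-product attention, then pool it into a single similarity score.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_graph_similarity(h1, h2):
    """h1: (n1, d), h2: (n2, d) node embeddings.

    Returns the cross-graph attention matrix and a scalar similarity score.
    """
    d = h1.shape[1]
    # Pairwise node-level interaction matrix (n1 x n2), scaled as in attention.
    sim = (h1 @ h2.T) / np.sqrt(d)
    attn = softmax(sim, axis=1)  # each node of graph 1 attends over graph 2
    # Collapse the matrix into one score; mean pooling is a stand-in for the
    # paper's learned mapping that exploits the matrix's inherent structure.
    score = float(np.tanh(sim).mean())
    return attn, score

h1 = np.random.default_rng(0).normal(size=(4, 8))
h2 = np.random.default_rng(1).normal(size=(5, 8))
attn, score = cross_graph_similarity(h1, h2)
print(attn.shape)  # (4, 5)
```

In the actual framework, both the interaction matrix and the matrix-to-score mapping are produced by learned attention modules trained jointly with the correlation-guided contrastive objective.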
About the Journal
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.