{"title":"DropEdge的自适应节点相似度","authors":"Yangcai Xie, Jiecheng Li, Shichao Zhang","doi":"10.1016/j.neucom.2025.129574","DOIUrl":null,"url":null,"abstract":"<div><div>There are two principal impediments of expanding deep graph convolutional networks (GCNs) due to the assumption of smoothness, <em>i.e.,</em> over-fitting and over-smoothing. DropEdge methods relieve the convergence speed and reduce the information loss by randomly dropping a specific rate of edges, hence effectively alleviates these two issues and has been widely used to many backbone models. However, thanks to the blindness and potential risks of randomly removing edges, current DropEdge methods often remove important edges and retain unimportant edges, this inevitably reduce the accuracy of learning results. In order to tackle the challenges in previous DropEdge methods, this paper proposes a precise removal technique through the node similarity, which is closely related to edges. Specifically, we employ the hybrid optimal node similarity to drop edges, on the one hand, the edges that severely affect over-fitting and over-smoothing of nodes, i.e., the edges with high node similarity, are removed; on the other hand, the edges that are outliers and noisy, i.e., the edges with a large difference in similarity to normal nodes, are also removed. Therefore, our methods significantly alleviate over-fitting and over-smoothing, accurately reduce the impact of outliers and noise, more importantly, our methods is a generic skill that can be deployed current GCN and its variants. 
Experimental results on seven benchmark datasets including three assortative datasets and four disassortative datasets show that our methods outperforms the state-of-the-art methods, improve the performance by a large margin especially for disassortative graphs.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129574"},"PeriodicalIF":6.5000,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive node similarity for DropEdge\",\"authors\":\"Yangcai Xie, Jiecheng Li, Shichao Zhang\",\"doi\":\"10.1016/j.neucom.2025.129574\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>There are two principal impediments of expanding deep graph convolutional networks (GCNs) due to the assumption of smoothness, <em>i.e.,</em> over-fitting and over-smoothing. DropEdge methods relieve the convergence speed and reduce the information loss by randomly dropping a specific rate of edges, hence effectively alleviates these two issues and has been widely used to many backbone models. However, thanks to the blindness and potential risks of randomly removing edges, current DropEdge methods often remove important edges and retain unimportant edges, this inevitably reduce the accuracy of learning results. In order to tackle the challenges in previous DropEdge methods, this paper proposes a precise removal technique through the node similarity, which is closely related to edges. Specifically, we employ the hybrid optimal node similarity to drop edges, on the one hand, the edges that severely affect over-fitting and over-smoothing of nodes, i.e., the edges with high node similarity, are removed; on the other hand, the edges that are outliers and noisy, i.e., the edges with a large difference in similarity to normal nodes, are also removed. 
Therefore, our methods significantly alleviate over-fitting and over-smoothing, accurately reduce the impact of outliers and noise, more importantly, our methods is a generic skill that can be deployed current GCN and its variants. Experimental results on seven benchmark datasets including three assortative datasets and four disassortative datasets show that our methods outperforms the state-of-the-art methods, improve the performance by a large margin especially for disassortative graphs.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"626 \",\"pages\":\"Article 129574\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-04-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225002462\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/2/4 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225002462","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/4 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Two principal impediments, both rooted in the smoothness assumption, hinder the deepening of graph convolutional networks (GCNs): over-fitting and over-smoothing. DropEdge methods mitigate both issues by randomly removing a fixed fraction of edges, which improves convergence and reduces information loss, and they have been widely adopted in many backbone models. However, because random edge removal is blind and carries inherent risk, current DropEdge methods often delete important edges while retaining unimportant ones, which inevitably degrades the accuracy of the learned results. To address the shortcomings of previous DropEdge methods, this paper proposes a precise edge-removal technique based on node similarity, a quantity closely tied to edges. Specifically, we employ a hybrid optimal node similarity to decide which edges to drop: on the one hand, the edges that most aggravate over-fitting and over-smoothing, i.e., edges between highly similar nodes, are removed; on the other hand, the edges connecting outliers and noisy nodes, i.e., edges whose similarity differs sharply from that of normal nodes, are also removed. As a result, our method significantly alleviates over-fitting and over-smoothing, accurately reduces the impact of outliers and noise, and, more importantly, is a generic technique that can be deployed in current GCNs and their variants. Experimental results on seven benchmark datasets, three assortative and four disassortative, show that our method outperforms state-of-the-art methods, improving performance by a large margin, especially on disassortative graphs.
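The abstract describes two removal rules: drop edges whose endpoints are highly similar (they drive over-smoothing), and drop edges whose similarity deviates strongly from the norm (outlier/noisy connections). The paper's exact "hybrid optimal node similarity" is not specified in the abstract, so the sketch below uses cosine similarity of node features as an illustrative stand-in; the function name `similarity_dropedge` and the thresholds `high_frac` and `outlier_z` are assumptions, not the authors' API.

```python
import numpy as np

def similarity_dropedge(X, edges, high_frac=0.1, outlier_z=2.0):
    """Similarity-guided edge dropping (illustrative sketch).

    X     : (n, d) node feature matrix
    edges : (m, 2) int array of undirected edges
    Returns the retained edges as an (m', 2) array.
    """
    # Cosine similarity between the endpoints of each edge
    # (a stand-in for the paper's hybrid optimal node similarity).
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = np.einsum("ij,ij->i", Xn[edges[:, 0]], Xn[edges[:, 1]])

    # Rule 1: drop the top `high_frac` most similar edges, the ones
    # the abstract identifies as driving over-fitting/over-smoothing.
    k = int(len(edges) * high_frac)
    high_idx = np.argsort(sim)[-k:] if k > 0 else np.array([], dtype=int)

    # Rule 2: drop edges whose similarity deviates strongly from the
    # mean, treated here as outlier or noisy connections.
    z = (sim - sim.mean()) / (sim.std() + 1e-12)
    out_idx = np.where(np.abs(z) > outlier_z)[0]

    drop = np.union1d(high_idx, out_idx)
    keep = np.setdiff1d(np.arange(len(edges)), drop)
    return edges[keep]
```

The retained edge list would then be fed to any GCN backbone in place of the original edge set, which is what makes this style of DropEdge a generic, model-agnostic preprocessing step.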
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.