DropNaE: Alleviating Irregularity for Large-Scale Graph Representation Learning
Xin Liu, Xunbin Xiong, Mingyu Yan, Runzhen Xue, Shirui Pan, Songwen Pei, Lei Deng, Xiaochun Ye, Dongrui Fan
Neural Networks, vol. 183, article 106930 (2024). DOI: 10.1016/j.neunet.2024.106930
Abstract
Large-scale graphs are prevalent in various real-world scenarios and can be effectively processed using Graph Neural Networks (GNNs) on GPUs to derive meaningful representations. However, the inherent irregularity of real-world graphs poses challenges for leveraging the single-instruction multiple-data execution mode of GPUs, leading to inefficiencies in GNN training. In this paper, we alleviate this irregularity at its origin: the irregular graph data itself. To this end, we propose DropNaE, which conditionally drops nodes and edges before GNN training. Specifically, we first present a metric to quantify the neighbor heterophily of every node in a graph. Then, based on this metric, we propose two variants of DropNaE that transform the irregular degree distribution of a large-scale graph into a uniform one. Experiments show that DropNaE is highly compatible and can be integrated into popular GNNs, improving both their training efficiency and accuracy. DropNaE is performed offline and requires no online computing resources, significantly benefiting current and future state-of-the-art GNNs.
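To make the drop-before-training idea concrete, here is a minimal, hypothetical Python sketch. The heterophily score used below (the fraction of a node's neighbors carrying a different label) and the drop rule (trim edges of nodes above a degree cap, removing edges to highly heterophilous neighbors first, then discarding isolated nodes) are illustrative assumptions only; the abstract does not specify the paper's actual metric or its two DropNaE variants. The names neighbor_heterophily, drop_nae, degree_cap, and het_threshold are invented for this sketch.

```python
# Illustrative sketch of conditionally dropping nodes and edges before
# GNN training, loosely following the idea described in the abstract.
# The metric and drop rule here are assumptions, not the paper's method.
import networkx as nx

def neighbor_heterophily(G, labels):
    """Assumed metric: fraction of a node's neighbors with a different label."""
    scores = {}
    for v in G.nodes:
        nbrs = list(G.neighbors(v))
        if not nbrs:
            scores[v] = 0.0
            continue
        scores[v] = sum(labels[u] != labels[v] for u in nbrs) / len(nbrs)
    return scores

def drop_nae(G, labels, degree_cap=4, het_threshold=0.5):
    """Assumed rule: trim edges of over-degree nodes, preferring edges to
    heterophilous neighbors, then drop nodes left isolated. Parameters
    degree_cap and het_threshold are illustrative knobs."""
    H = G.copy()
    scores = neighbor_heterophily(H, labels)
    for v in list(H.nodes):
        excess = H.degree(v) - degree_cap
        if excess <= 0:
            continue
        # Sort neighbors so the most heterophilous ones are dropped first.
        nbrs = sorted(H.neighbors(v), key=lambda u: scores[u], reverse=True)
        to_drop = [(v, u) for u in nbrs if scores[u] >= het_threshold][:excess]
        H.remove_edges_from(to_drop)
    H.remove_nodes_from([v for v in list(H.nodes) if H.degree(v) == 0])
    return H

if __name__ == "__main__":
    G = nx.karate_club_graph()
    labels = {v: d["club"] for v, d in G.nodes(data=True)}
    H = drop_nae(G, labels)
    print("before:", G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
    print("after: ", H.number_of_nodes(), "nodes,", H.number_of_edges(), "edges")
```

Because such a transformation depends only on the graph and its labels, it can run once as an offline preprocessing step, consistent with the abstract's point that DropNaE requires no online computing resources during training.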
Journal Overview
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.