Scalable decoupling graph neural network with feature-oriented optimization

Ningyi Liao, Dingheng Mo, Siqiang Luo, Xiang Li, Pengcheng Yin
{"title":"Scalable decoupling graph neural network with feature-oriented optimization","authors":"Ningyi Liao, Dingheng Mo, Siqiang Luo, Xiang Li, Pengcheng Yin","doi":"10.1007/s00778-023-00829-6","DOIUrl":null,"url":null,"abstract":"<p>Recent advances in data processing have stimulated the demand for learning graphs of very large scales. Graph neural networks (GNNs), being an emerging and powerful approach in solving graph learning tasks, are known to be difficult to scale up. Most scalable models apply node-based techniques in simplifying the expensive graph message-passing propagation procedure of GNNs. However, we find such acceleration insufficient when applied to million- or even billion-scale graphs. In this work, we propose <span>SCARA</span>, a scalable GNN with feature-oriented optimization for graph computation. <span>SCARA</span> efficiently computes graph embedding from the dimension of node features, and further selects and reuses feature computation results to reduce overhead. Theoretical analysis indicates that our model achieves sub-linear time complexity with a guaranteed precision in propagation process as well as GNN training and inference. We conduct extensive experiments on various datasets to evaluate the efficacy and efficiency of <span>SCARA</span>. Performance comparison with baselines shows that <span>SCARA</span> can reach up to <span>\\(800\\times \\)</span> graph propagation acceleration than current state-of-the-art methods with fast convergence and comparable accuracy. Most notably, it is efficient to process precomputation on the largest available billion-scale GNN dataset Papers100M (111 M nodes, 1.6 B edges) in 13 s.</p>","PeriodicalId":501532,"journal":{"name":"The VLDB Journal","volume":"4 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The VLDB Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00778-023-00829-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent advances in data processing have stimulated the demand for learning on graphs of very large scale. Graph neural networks (GNNs), an emerging and powerful approach to graph learning tasks, are known to be difficult to scale up. Most scalable models apply node-based techniques to simplify the expensive message-passing propagation procedure of GNNs. However, we find such acceleration insufficient when applied to million- or even billion-scale graphs. In this work, we propose SCARA, a scalable GNN with feature-oriented optimization for graph computation. SCARA efficiently computes graph embeddings along the dimension of node features, and further selects and reuses feature computation results to reduce overhead. Theoretical analysis indicates that our model achieves sub-linear time complexity with guaranteed precision in the propagation process as well as in GNN training and inference. We conduct extensive experiments on various datasets to evaluate the efficacy and efficiency of SCARA. Performance comparison with baselines shows that SCARA accelerates graph propagation by up to 800× over current state-of-the-art methods, with fast convergence and comparable accuracy. Most notably, it completes precomputation on the largest available billion-scale GNN dataset, Papers100M (111M nodes, 1.6B edges), in 13 s.
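
To make the decoupled, feature-oriented design described above more concrete, the sketch below shows a minimal version of the idea in Python: graph propagation is precomputed column-by-column over the node feature matrix as a truncated personalized-PageRank-style sum, independently of model training. This is only an illustration under assumed helper names (`propagate_column`, `precompute_embeddings`); it does not reproduce SCARA's actual Feature-Push and Feature-Reuse algorithms or their sub-linear complexity guarantees.

```python
# Illustrative sketch of a decoupled GNN pipeline with feature-oriented
# propagation (not the paper's algorithm). Assumes NumPy/SciPy.
import numpy as np
import scipy.sparse as sp


def propagate_column(adj_norm, x_col, alpha=0.1, num_hops=10):
    """Propagate one feature column: sum_l alpha*(1-alpha)^l * (A_hat^l x_col)."""
    out = np.zeros_like(x_col)
    h = x_col.copy()
    for hop in range(num_hops):
        out += alpha * (1.0 - alpha) ** hop * h
        h = adj_norm @ h
    return out


def precompute_embeddings(adj, features, alpha=0.1, num_hops=10):
    """Decoupled precomputation: propagate each feature column once,
    independently of model training, so the result can be cached and
    later fed to an ordinary MLP classifier."""
    features = np.asarray(features, dtype=np.float64)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    deg[deg == 0] = 1.0                       # guard against isolated nodes
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    adj_norm = d_inv_sqrt @ adj @ d_inv_sqrt  # symmetric normalization
    emb = np.empty_like(features)
    for f in range(features.shape[1]):        # one pass per feature column
        emb[:, f] = propagate_column(adj_norm, features[:, f], alpha, num_hops)
    return emb


if __name__ == "__main__":
    # Toy 4-node path graph with 3-dimensional node features.
    adj = sp.csr_matrix(np.array([[0, 1, 0, 0],
                                  [1, 0, 1, 0],
                                  [0, 1, 0, 1],
                                  [0, 0, 1, 0]], dtype=np.float64))
    features = np.random.rand(4, 3)
    print(precompute_embeddings(adj, features).shape)  # (4, 3)
```

In this decoupled setup the expensive graph traversal is paid once at precomputation time rather than in every training epoch, and the propagation cost grows with the number of feature columns that must be pushed; viewing propagation from the feature dimension is also what makes it possible, as the abstract notes, to select and reuse computation results across feature columns.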
