HiHGNN: Accelerating HGNNs Through Parallelism and Data Reusability Exploitation

IF 5.6 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods) · IEEE Transactions on Parallel and Distributed Systems · Pub Date: 2024-04-30 · DOI: 10.1109/TPDS.2024.3394841
Runzhen Xue;Dengke Han;Mingyu Yan;Mo Zou;Xiaocheng Yang;Duo Wang;Wenming Li;Zhimin Tang;John Kim;Xiaochun Ye;Dongrui Fan
{"title":"HiHGNN: Accelerating HGNNs Through Parallelism and Data Reusability Exploitation","authors":"Runzhen Xue;Dengke Han;Mingyu Yan;Mo Zou;Xiaocheng Yang;Duo Wang;Wenming Li;Zhimin Tang;John Kim;Xiaochun Ye;Dongrui Fan","doi":"10.1109/TPDS.2024.3394841","DOIUrl":null,"url":null,"abstract":"Heterogeneous graph neural networks (HGNNs) have emerged as powerful algorithms for processing heterogeneous graphs (HetGs), widely used in many critical fields. To capture both structural and semantic information in HetGs, HGNNs first aggregate the neighboring feature vectors for each vertex in each semantic graph and then fuse the aggregated results across all semantic graphs for each vertex. Unfortunately, existing graph neural network accelerators are ill-suited to accelerate HGNNs. This is because they fail to efficiently tackle the specific execution patterns and exploit the high-degree parallelism as well as data reusability inside and across the processing of semantic graphs in HGNNs. In this work, we first quantitatively characterize a set of representative HGNN models on GPU to disclose the execution bound of each stage, inter-semantic-graph parallelism, and inter-semantic-graph data reusability in HGNNs. Guided by our findings, we propose a high-performance HGNN accelerator, HiHGNN, to alleviate the execution bound and exploit the newfound parallelism and data reusability in HGNNs. Specifically, we first propose a bound-aware stage-fusion methodology that tailors to HGNN acceleration, to fuse and pipeline the execution stages being aware of their execution bounds. Second, we design an independency-aware parallel execution design to exploit the inter-semantic-graph parallelism. Finally, we present a similarity-aware execution scheduling to exploit the inter-semantic-graph data reusability. Compared to the state-of-the-art software framework running on NVIDIA GPU T4 and GPU A100, HiHGNN respectively achieves an average 40.0× and 8.3× speedup as well as 99.59% and 99.74% energy reduction with quintile the memory bandwidth of GPU A100.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":null,"pages":null},"PeriodicalIF":5.6000,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Parallel and Distributed Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10510500/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Heterogeneous graph neural networks (HGNNs) have emerged as powerful algorithms for processing heterogeneous graphs (HetGs) and are widely used in many critical fields. To capture both structural and semantic information in HetGs, HGNNs first aggregate the neighboring feature vectors for each vertex in each semantic graph and then, for each vertex, fuse the aggregated results across all semantic graphs. Unfortunately, existing graph neural network accelerators are ill-suited to accelerating HGNNs, because they fail to efficiently handle HGNNs' specific execution patterns or to exploit the high degree of parallelism and data reusability within and across the processing of semantic graphs. In this work, we first quantitatively characterize a set of representative HGNN models on GPUs to disclose the execution bound of each stage, the inter-semantic-graph parallelism, and the inter-semantic-graph data reusability in HGNNs. Guided by these findings, we propose a high-performance HGNN accelerator, HiHGNN, to alleviate the execution bounds and exploit the newfound parallelism and data reusability in HGNNs. Specifically, we first propose a bound-aware stage-fusion methodology tailored to HGNN acceleration, which fuses and pipelines the execution stages according to their execution bounds. Second, we design an independency-aware parallel execution scheme to exploit the inter-semantic-graph parallelism. Finally, we present a similarity-aware execution schedule to exploit the inter-semantic-graph data reusability. Compared to the state-of-the-art software framework running on an NVIDIA T4 GPU and an A100 GPU, HiHGNN achieves an average 40.0× and 8.3× speedup, respectively, as well as 99.59% and 99.74% energy reduction, with only one-fifth the memory bandwidth of the A100.
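To make the two-stage dataflow described above concrete, the sketch below illustrates, in plain NumPy rather than the authors' implementation, how an HGNN first aggregates neighbor feature vectors within each semantic graph and then fuses the per-graph results for every vertex. The mean-based `aggregate` and `fuse` functions and the adjacency-list encoding of semantic graphs are illustrative assumptions; real HGNN models typically apply learned projections and attention in both stages.

```python
# A minimal sketch of the two-stage HGNN dataflow (assumptions noted above):
# per-semantic-graph neighbor aggregation, then per-vertex semantic fusion.
import numpy as np

def aggregate(adj, feats):
    """Mean-aggregate neighbor feature vectors for every vertex in one
    semantic graph. adj maps each vertex id to a list of neighbor ids."""
    out = np.zeros_like(feats)
    for v, nbrs in adj.items():
        if nbrs:
            out[v] = feats[nbrs].mean(axis=0)
    return out

def fuse(per_graph_results):
    """Fuse each vertex's aggregated results across all semantic graphs;
    an unweighted mean stands in here for learned semantic attention."""
    return np.mean(np.stack(per_graph_results), axis=0)

# Toy example: 4 vertices with 3-dim features and two semantic graphs
# (e.g., two metapaths such as author-paper-author and author-venue-author).
feats = np.random.rand(4, 3)
semantic_graphs = [
    {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]},
    {0: [3], 1: [2, 3], 2: [1], 3: [0, 1]},
]
# The per-graph aggregations are mutually independent ...
aggregated = [aggregate(g, feats) for g in semantic_graphs]
# ... and only the final fusion joins them per vertex.
embeddings = fuse(aggregated)
print(embeddings.shape)  # (4, 3)
```

Note that the loop over `semantic_graphs` carries no dependencies between iterations; this independence is precisely the inter-semantic-graph parallelism that HiHGNN's independency-aware parallel execution exploits, and similar semantic graphs sharing many vertices motivate its similarity-aware scheduling for data reuse.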
Source journal
IEEE Transactions on Parallel and Distributed Systems (Engineering Technology: Electrical & Electronic Engineering)
CiteScore: 11.00
Self-citation rate: 9.40%
Annual publication volume: 281
Average review time: 5.6 months
Journal description: IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers. Particular areas of interest include, but are not limited to:
a) Parallel and distributed algorithms, focusing on topics such as: models of computation; numerical, combinatorial, and data-intensive parallel algorithms; scalability of algorithms and data structures for parallel and distributed systems; communication and synchronization protocols; network algorithms; scheduling; and load balancing.
b) Applications of parallel and distributed computing, including computational and data-enabled science and engineering, big data applications, parallel crowd sourcing, large-scale social network analysis, management of big data, cloud and grid computing, scientific and biomedical applications, mobile computing, and cyber-physical systems.
c) Parallel and distributed architectures, including architectures for instruction-level and thread-level parallelism; design, analysis, implementation, fault resilience, and performance measurements of multiple-processor systems; multicore processors; heterogeneous many-core systems; petascale and exascale systems designs; novel big data architectures; special purpose architectures, including graphics processors, signal processors, network processors, media accelerators, and other special purpose processors and accelerators; impact of technology on architecture; network and interconnect architectures; parallel I/O and storage systems; architecture of the memory hierarchy; power-efficient and green computing architectures; dependable architectures; and performance modeling and evaluation.
d) Parallel and distributed software, including parallel and multicore programming languages and compilers, runtime systems, operating systems, Internet computing and web services, resource management including green computing, middleware for grids, clouds, and data centers, libraries, performance modeling and evaluation, parallel programming paradigms, and programming environments and tools.
Latest articles in this journal
- Freyr+: Harvesting Idle Resources in Serverless Computing Via Deep Reinforcement Learning
- Efficient Inference for Pruned CNN Models on Mobile Devices With Holistic Sparsity Alignment
- Efficient Cross-Cloud Partial Reduce With CREW
- DeepCAT+: A Low-Cost and Transferrable Online Configuration Auto-Tuning Approach for Big Data Frameworks
- An Evaluation Framework for Dynamic Thermal Management Strategies in 3D MultiProcessor System-on-Chip Co-Design