Characterizing and Understanding HGNN Training on GPUs
Dengke Han, Mingyu Yan, Xiaochun Ye, Dongrui Fan, Ninghui Sun
arXiv:2407.11790 (arXiv - CS - Performance), published 2024-07-16
Owing to their remarkable representation capabilities for heterogeneous graph data, Heterogeneous Graph Neural Networks (HGNNs) have been widely adopted in many critical real-world domains such as recommendation systems and medical analysis. Before they can be applied in practice, the optimal HGNN model parameters for a specific task must be identified through extensive training, a time-consuming and costly process. To improve the efficiency of HGNN training, it is essential to characterize and analyze the execution semantics and patterns of the training process and thereby identify its performance bottlenecks. In this study, we conduct an in-depth quantification and analysis of two mainstream HGNN training scenarios: single-GPU training and multi-GPU distributed training. Based on the characterization results, we disclose the performance bottlenecks and their underlying causes in the different HGNN training scenarios and provide optimization guidelines from both software and hardware perspectives.
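
To make the single-GPU characterization scenario concrete, the sketch below shows one common way to profile an HGNN-style training step with PyTorch's built-in torch.profiler. This is a minimal illustration under stated assumptions, not code from the paper: the RelationwiseLayer, the relation names, the random dense adjacencies, and all tensor sizes are hypothetical stand-ins for a real heterogeneous graph and model.

# Minimal sketch: profile a toy single-GPU HGNN-style training step.
# The model, relations, and data here are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torch.profiler import profile, record_function, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"

class RelationwiseLayer(nn.Module):
    """Toy HGNN layer: per-relation projection, neighbor aggregation, semantic fusion."""
    def __init__(self, relations, in_dim, out_dim):
        super().__init__()
        self.proj = nn.ModuleDict({r: nn.Linear(in_dim, out_dim) for r in relations})

    def forward(self, feats, adjs):
        # feats: shared node features; adjs: one (dense, toy) adjacency matrix per relation
        msgs = [adjs[r] @ self.proj[r](feats) for r in self.proj]   # neighbor aggregation
        return torch.relu(torch.stack(msgs, dim=0).sum(dim=0))      # fuse across relations

relations = ["writes", "cites"]                                      # hypothetical relation types
num_nodes, in_dim, out_dim, num_classes = 4096, 128, 64, 8
feats = torch.randn(num_nodes, in_dim, device=device)
adjs = {r: (torch.rand(num_nodes, num_nodes, device=device) < 0.001).float()
        for r in relations}                                          # sparse-ish random adjacency
labels = torch.randint(0, num_classes, (num_nodes,), device=device)

layer = RelationwiseLayer(relations, in_dim, out_dim).to(device)
clf = nn.Linear(out_dim, num_classes).to(device)
opt = torch.optim.Adam(list(layer.parameters()) + list(clf.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    for _ in range(5):                       # a few iterations to amortize warm-up effects
        with record_function("forward"):
            logits = clf(layer(feats, adjs))
            loss = loss_fn(logits, labels)
        with record_function("backward_and_update"):
            opt.zero_grad()
            loss.backward()
            opt.step()

# Rank operators by accumulated GPU (or CPU) time to see where a step actually goes.
sort_key = "cuda_time_total" if device == "cuda" else "cpu_time_total"
print(prof.key_averages().table(sort_by=sort_key, row_limit=15))

For the multi-GPU distributed scenario, the same profiler can be attached per rank to a model wrapped in torch.nn.parallel.DistributedDataParallel, which makes it possible to separate time spent in computation kernels from time spent in gradient-synchronization (all-reduce) communication.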