Gossip
Robin Kobus, Daniel Jünger, Christian Hundt, B. Schmidt
DOI: 10.1145/3337821.3337889 (ICPP 2019)
Citations: 7
Abstract
A growing number of servers and workstations feature multiple GPUs, but slow communication among GPUs can lead to poor application performance. There is thus a latent demand for efficient multi-GPU communication primitives on such systems. This paper focuses on the gather, scatter, and all-to-all collectives, which are important operations for various algorithms including parallel sorting and distributed hashing. We present two distinct communication strategies (ring-based and flow-oriented) that generate transfer plans for topology-aware implementations of these collectives on NVLink-connected multi-GPU systems. We achieve a throughput of up to 526 GB/s for all-to-all and 148 GB/s for scatter/gather on a DGX-1 server with only a small memory overhead. Furthermore, we propose a cost-neutral alternative to the DGX-1 Volta topology that is expected to provide higher all-to-all throughput while preserving scatter/gather throughput. Our Gossip library is freely available at https://github.com/Funatiq/gossip.
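For context, the sketch below shows the naive baseline that such transfer plans improve on: a direct peer-to-peer all-to-all in CUDA, where every GPU copies each partition straight to its destination over whatever link connects the pair. This is not Gossip's API, and the buffer names and the 1 MiB chunk size are illustrative assumptions only.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Naive direct all-to-all across G GPUs: GPU g copies its partition for
// GPU p straight to p in one hop. Topology-aware transfer plans (as in
// Gossip) instead split and forward chunks across multiple NVLink lanes.
int main() {
    int num_gpus = 0;
    cudaGetDeviceCount(&num_gpus);
    const size_t chunk_bytes = 1 << 20;  // 1 MiB per (src, dst) pair (illustrative)

    std::vector<char*> send(num_gpus), recv(num_gpus);
    std::vector<cudaStream_t> streams(num_gpus);

    for (int g = 0; g < num_gpus; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&send[g], chunk_bytes * num_gpus);  // G outgoing chunks
        cudaMalloc(&recv[g], chunk_bytes * num_gpus);  // G incoming chunks
        cudaStreamCreate(&streams[g]);
        for (int p = 0; p < num_gpus; ++p) {           // enable P2P where supported
            int can_access = 0;
            if (p != g) cudaDeviceCanAccessPeer(&can_access, g, p);
            if (can_access) cudaDeviceEnablePeerAccess(p, 0);
        }
    }

    // Source GPU g copies chunk p of its send buffer into slot g of
    // destination GPU p's receive buffer (including its own local chunk).
    for (int g = 0; g < num_gpus; ++g) {
        cudaSetDevice(g);
        for (int p = 0; p < num_gpus; ++p) {
            cudaMemcpyPeerAsync(recv[p] + g * chunk_bytes, p,
                                send[g] + p * chunk_bytes, g,
                                chunk_bytes, streams[g]);
        }
    }
    for (int g = 0; g < num_gpus; ++g) {
        cudaSetDevice(g);
        cudaStreamSynchronize(streams[g]);
    }
    printf("all-to-all done on %d GPUs\n", num_gpus);
    return 0;
}

On a DGX-1, this direct scheme leaves much of the aggregate NVLink bandwidth idle because every chunk takes a single hop; the paper's ring-based and flow-oriented strategies schedule multi-hop, chunked transfers that use several links concurrently, which is how the reported 526 GB/s all-to-all throughput is reached.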