Leveraging Network-level parallelism with Multiple Process-Endpoints for MPI Broadcast

Amit Ruhela, B. Ramesh, S. Chakraborty, H. Subramoni, J. Hashmi, D. Panda
{"title":"Leveraging Network-level parallelism with Multiple Process-Endpoints for MPI Broadcast","authors":"Amit Ruhela, B. Ramesh, S. Chakraborty, H. Subramoni, J. Hashmi, D. Panda","doi":"10.1109/IPDRM49579.2019.00009","DOIUrl":null,"url":null,"abstract":"The Message Passing Interface has been the dominating programming model for developing scalable and high-performance parallel applications. Collective operations empower group communication operations in a portable, and efficient manner and are used by a large number of applications across different domains. Optimization of collective operations is the key to achieve good performance speed-ups and portability. Broadcast or One-to-all communication is one of the most commonly used collectives in MPI applications. However, the existing algorithms for broadcast do not effectively utilize the high degree of parallelism and increased message rate capabilities offered by modern architectures. In this paper, we address these challenges and propose a Scalable Multi-Endpoint broadcast algorithm that combines hierarchical communication with multiple endpoints per node for high performance and scalability. We evaluate the proposed algorithm against state-of-the-art designs in other MPI libraries, including MVAPICH2, Intel MPI, and Spectrum MPI. We demonstrate the benefits of the proposed algorithm at benchmark and application level at scale on four different hardware architectures, including Intel Cascade Lake, Intel Skylake, AMD EPYC, and IBM POWER9, and with InfiniBand and Omni-Path interconnects. Compared to other state-of-the-art designs, our proposed design shows up to 2.5 times performance improvements at a microbenchmark level with 128 Nodes. We also observe up to 37% improvement in broadcast communication latency for the SPECMPI scientific applications","PeriodicalId":256149,"journal":{"name":"2019 IEEE/ACM Third Annual Workshop on Emerging Parallel and Distributed Runtime Systems and Middleware (IPDRM)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/ACM Third Annual Workshop on Emerging Parallel and Distributed Runtime Systems and Middleware (IPDRM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDRM49579.2019.00009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

The Message Passing Interface (MPI) has been the dominant programming model for developing scalable, high-performance parallel applications. Collective operations provide group communication in a portable and efficient manner and are used by a large number of applications across different domains. Optimizing collective operations is key to achieving good performance and portability. Broadcast, or one-to-all communication, is one of the most commonly used collectives in MPI applications. However, existing broadcast algorithms do not effectively utilize the high degree of parallelism and the increased message-rate capabilities offered by modern architectures. In this paper, we address these challenges and propose a Scalable Multi-Endpoint broadcast algorithm that combines hierarchical communication with multiple endpoints per node for high performance and scalability. We evaluate the proposed algorithm against state-of-the-art designs in other MPI libraries, including MVAPICH2, Intel MPI, and Spectrum MPI. We demonstrate the benefits of the proposed algorithm at benchmark and application level at scale on four different hardware architectures, including Intel Cascade Lake, Intel Skylake, AMD EPYC, and IBM POWER9, with InfiniBand and Omni-Path interconnects. Compared to other state-of-the-art designs, the proposed design shows up to 2.5x performance improvement at the microbenchmark level on 128 nodes. We also observe up to 37% improvement in broadcast communication latency for SPEC MPI scientific applications.
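The paper's multi-endpoint implementation is not reproduced here, but the hierarchical half of the idea can be illustrated with standard MPI. The sketch below is a minimal, hypothetical example, assuming the root is global rank 0: node leaders first exchange the message across the network, then each leader re-broadcasts within its node over shared memory. The `hierarchical_bcast` helper is an invented name for illustration, not an API from the paper or from MVAPICH2, and plain MPI does not portably expose the multiple network endpoints per node that the proposed design exploits.

```c
/*
 * Illustrative sketch (not the paper's implementation) of a hierarchical
 * broadcast: an inter-node broadcast among node leaders, followed by an
 * intra-node broadcast over shared memory. Assumes the root is global
 * rank 0 for brevity. Built only from standard MPI-3 calls.
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

static void hierarchical_bcast(void *buf, int count, MPI_Datatype dtype,
                               MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    /* Group ranks that share a physical node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, rank,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader per node (node_rank 0) joins the inter-node communicator;
     * all other ranks receive MPI_COMM_NULL. Because keys follow the global
     * rank order, the global root (rank 0) becomes leader rank 0. */
    int is_leader = (node_rank == 0);
    MPI_Comm leader_comm;
    MPI_Comm_split(comm, is_leader ? 0 : MPI_UNDEFINED, rank, &leader_comm);

    /* Step 1: only one copy of the message crosses the network per node. */
    if (is_leader) {
        MPI_Bcast(buf, count, dtype, 0, leader_comm);
        MPI_Comm_free(&leader_comm);
    }

    /* Step 2: each leader fans the message out within its own node. */
    MPI_Bcast(buf, count, dtype, 0, node_comm);
    MPI_Comm_free(&node_comm);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char msg[64] = {0};
    if (rank == 0)
        strcpy(msg, "hello from root");

    hierarchical_bcast(msg, sizeof(msg), MPI_CHAR, MPI_COMM_WORLD);
    printf("rank %d got: %s\n", rank, msg);

    MPI_Finalize();
    return 0;
}
```

The hierarchy alone already ensures that only one copy of the message crosses the network per node; the paper's contribution is to layer multiple process-endpoints onto that inter-node step so the broadcast can exploit the higher message rates of modern interconnects.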