Recent experiences in using MPI-3 RMA in the DASH PGAS runtime

Joseph Schuchart, R. Kowalewski, Karl Fuerlinger
{"title":"Recent experiences in using MPI-3 RMA in the DASH PGAS runtime","authors":"Joseph Schuchart, R. Kowalewski, Karl Fuerlinger","doi":"10.1145/3176364.3176367","DOIUrl":null,"url":null,"abstract":"The Partitioned Global Address Space (PGAS) programming model has become a viable alternative to traditional message passing using MPI. The DASH project provides a PGAS abstraction entirely based on C++11. The underlying DASH RunTime, DART, provides communication and management functionality transparently to the user. In order to facilitate incremental transitions of existing MPI-parallel codes, the development of DART has focused on creating a PGAS runtime based on the MPI-3 RMA standard. From an MPI-RMA user perspective, this paper outlines our recent experiences in the development of DART and presents insights into issues that we faced and how we attempted to solve them, including issues surrounding memory allocation and memory consistency as well as communication latencies. We implemented a set of benchmarks for global memory allocation latency in the framework of the OSU micro-benchmark suite and present results for allocation and communication latency measurements of different global memory allocation strategies under three different MPI implementations.","PeriodicalId":371083,"journal":{"name":"Proceedings of Workshops of HPC Asia","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of Workshops of HPC Asia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3176364.3176367","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

The Partitioned Global Address Space (PGAS) programming model has become a viable alternative to traditional message passing using MPI. The DASH project provides a PGAS abstraction entirely based on C++11. The underlying DASH RunTime, DART, provides communication and management functionality transparently to the user. In order to facilitate incremental transitions of existing MPI-parallel codes, the development of DART has focused on creating a PGAS runtime based on the MPI-3 RMA standard. From an MPI-RMA user perspective, this paper outlines our recent experiences in the development of DART and presents insights into issues that we faced and how we attempted to solve them, including issues surrounding memory allocation and memory consistency as well as communication latencies. We implemented a set of benchmarks for global memory allocation latency in the framework of the OSU micro-benchmark suite and present results for allocation and communication latency measurements of different global memory allocation strategies under three different MPI implementations.
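To give a concrete picture of what "different global memory allocation strategies" can mean at the MPI level, the sketch below contrasts two standard MPI-3 RMA approaches: collective allocation with MPI_Win_allocate, and a dynamic window created with MPI_Win_create_dynamic to which memory is attached afterwards. It times window creation and a simple put+flush round for each. This is a minimal, self-contained illustration of the MPI-3 RMA calls involved, not code from DART or from the paper's OSU-based benchmarks; the window size, iteration count, neighbor pattern, and timing loop are illustrative assumptions.

/* Minimal sketch: two MPI-3 RMA global memory allocation strategies.
 * Compile: mpicc rma_alloc_sketch.c -o rma_alloc_sketch
 * Run:     mpirun -n 2 ./rma_alloc_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NELEM 1024   /* illustrative window size (in doubles) */
#define ITERS 100    /* illustrative number of timed puts     */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double src    = 42.0;
    int    target = (rank + 1) % size;   /* neighbor we put into */

    /* Strategy 1: collective allocation, memory provided by the MPI library. */
    double *base_alloc = NULL;
    MPI_Win win_alloc;
    double t0 = MPI_Wtime();
    MPI_Win_allocate(NELEM * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &base_alloc, &win_alloc);
    double t_alloc = MPI_Wtime() - t0;

    MPI_Win_lock_all(0, win_alloc);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; ++i) {
        MPI_Put(&src, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win_alloc);
        MPI_Win_flush(target, win_alloc);   /* complete the put at the target */
    }
    double t_put_alloc = (MPI_Wtime() - t0) / ITERS;
    MPI_Win_unlock_all(win_alloc);
    MPI_Win_free(&win_alloc);

    /* Strategy 2: dynamic window, user memory attached after window creation. */
    MPI_Win win_dyn;
    double *base_dyn = malloc(NELEM * sizeof(double));
    t0 = MPI_Wtime();
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win_dyn);
    MPI_Win_attach(win_dyn, base_dyn, NELEM * sizeof(double));
    double t_dyn = MPI_Wtime() - t0;

    /* With dynamic windows the target displacement is an absolute address,
     * so each rank publishes the address of its attached memory. */
    MPI_Aint local_addr, target_addr;
    MPI_Get_address(base_dyn, &local_addr);
    MPI_Sendrecv(&local_addr, 1, MPI_AINT, (rank - 1 + size) % size, 0,
                 &target_addr, 1, MPI_AINT, target, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Win_lock_all(0, win_dyn);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; ++i) {
        MPI_Put(&src, 1, MPI_DOUBLE, target, target_addr, 1, MPI_DOUBLE, win_dyn);
        MPI_Win_flush(target, win_dyn);
    }
    double t_put_dyn = (MPI_Wtime() - t0) / ITERS;
    MPI_Win_unlock_all(win_dyn);

    MPI_Win_detach(win_dyn, base_dyn);
    MPI_Win_free(&win_dyn);
    free(base_dyn);

    if (rank == 0) {
        printf("MPI_Win_allocate:        alloc %.3f us, put+flush %.3f us\n",
               t_alloc * 1e6, t_put_alloc * 1e6);
        printf("dynamic window + attach: alloc %.3f us, put+flush %.3f us\n",
               t_dyn * 1e6, t_put_dyn * 1e6);
    }

    MPI_Finalize();
    return 0;
}

The trade-off this sketch hints at: MPI_Win_allocate lets the MPI library choose and register the memory (often enabling lower communication latency), while dynamic windows allow memory to be attached incrementally after window creation, which maps more naturally onto PGAS-style global allocations but typically incurs extra cost per operation.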