First class communication in MPI

E. Demaine
Proceedings, Second MPI Developer's Conference, July 1996.
DOI: 10.1109/MPIDC.1996.534113
Cited 3 times.

Abstract

We compare three concurrent-programming languages based on message passing: Concurrent ML (CML), Occam, and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class, just like ordinary program values (e.g., integers): they can be created at run time, assigned to variables, and passed to and returned from functions. In addition, CML provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how this static model enforces severe restrictions on communication events, and how those restrictions affect the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message passing from C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular, most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, as well as a less general form of first-class communication events. We propose an extension to MPI that provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. If these modifications are incorporated into the standard and its implementations, higher-order concurrency and its advantages will become more widespread.
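The abstract's central idea, communication events as first-class values that can be combined with choose, wrap, and guard before being synchronized on, can be illustrated with a small sketch. The following Python emulation is not from the paper: the `Channel` and `Event` classes and the polling-based `sync` are simplifications invented here for illustration, and a real CML runtime blocks and commits atomically rather than polling.

```python
# Hypothetical sketch (not from the paper): CML-style first-class events
# emulated in Python. Channel, Event, and the polling-based sync() are
# simplifications; a real CML runtime blocks and commits atomically.
import queue
import random
import threading
import time

class Channel:
    """A buffered channel; try_get is a non-blocking receive attempt."""
    def __init__(self):
        self._q = queue.Queue()

    def put(self, value):
        self._q.put(value)

    def try_get(self):
        try:
            return True, self._q.get_nowait()
        except queue.Empty:
            return False, None

class Event:
    """A first-class communication event: an ordinary value that can be
    stored in variables, passed to functions, and combined before it is
    finally synchronized on."""
    def __init__(self, poll):
        self._poll = poll  # non-blocking attempt: () -> (ready?, value)

    def sync(self):
        # Busy-wait until some base event commits (a stand-in for blocking).
        while True:
            ready, value = self._poll()
            if ready:
                return value
            time.sleep(0.001)

def recv_evt(ch):
    """Base event: receive one value from channel ch."""
    return Event(ch.try_get)

def wrap(evt, f):
    """wrap combinator: apply f to the result once evt commits."""
    def poll():
        ready, value = evt._poll()
        return (True, f(value)) if ready else (False, None)
    return Event(poll)

def guard(mk_evt):
    """guard combinator: build the underlying event lazily, at sync time."""
    return Event(lambda: mk_evt()._poll())

def choose(*evts):
    """choose combinator: commit to whichever event is ready first."""
    def poll():
        for evt in random.sample(evts, len(evts)):
            ready, value = evt._poll()
            if ready:
                return True, value
        return False, None
    return Event(poll)

# Demo: an event built from combinators, stored in a variable, then synced.
a, b = Channel(), Channel()
evt = choose(wrap(recv_evt(a), lambda v: ("a", v)),
             wrap(recv_evt(b), lambda v: ("b", v)))
threading.Thread(target=lambda: b.put(42)).start()
print(evt.sync())  # only channel b ever fires, so this prints ('b', 42)
```

Because `evt` is an ordinary value, it can be returned from a function or passed to another thread before anyone synchronizes on it; this is exactly the flexibility the paper argues Occam's static model rules out and that an extended MPI could provide.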