The Influence of Efficient Message Passing Mechanisms on High Performance Distributed Scientific Computing

S. L. Mirtaheri, Ehsan Mousavi Khaneghah, M. Sharifi, M. A. Azgomi
{"title":"高效消息传递机制对高性能分布式科学计算的影响","authors":"S. L. Mirtaheri, Ehsan Mousavi Khaneghah, M. Sharifi, M. A. Azgomi","doi":"10.1109/ISPA.2008.131","DOIUrl":null,"url":null,"abstract":"Parallel programming and distributed programming are two solutions for scientific applications to provide high performance and fast response time in parallel systems and distributed systems. Parallel and distributed systems must provide inter process communication (IPC) mechanisms like message passing mechanism as underlying platforms to enable communication between local and especially geographically dispersed and physically distributed processes. Communication overhead is the major problem in these systems and there are a lot of efforts to develop more efficient message passing mechanisms or to improve the network communication speed. This paper provides hard evidence that an efficient implementation of message passing mechanism on multi-computers reduces the execution time of a molecular dynamics code. A well-known program for macromolecular dynamics and mechanics called CHARMm is executed on a networked cluster. The performance of CHARMm is measured with two distributed implementations of message passing, namely a kernel-level implementation called DIPC2006 and a renowned library level implementation called MPI. It is shown that the performance of CHARMm on a DIPC2006 configured cluster is by far better than its performance on an optimized MPI configured similar cluster. Even ignoring the favorable points of kernel-level implementations, like safety, privilege, reliability, and primitiveness, the insight is twofold. Scientists are nowadays faced with more computational complexity and look for more efficient systems and mechanisms. Efficient distributed IPC mechanisms have direct effect on running scientistspsila simulations faster, and computer engineers may try harder to develop more efficient distributed implementations of IPC.","PeriodicalId":345341,"journal":{"name":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"102 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"The Influence of Efficient Message Passing Mechanisms on High Performance Distributed Scientific Computing\",\"authors\":\"S. L. Mirtaheri, Ehsan Mousavi Khaneghah, M. Sharifi, M. A. Azgomi\",\"doi\":\"10.1109/ISPA.2008.131\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Parallel programming and distributed programming are two solutions for scientific applications to provide high performance and fast response time in parallel systems and distributed systems. Parallel and distributed systems must provide inter process communication (IPC) mechanisms like message passing mechanism as underlying platforms to enable communication between local and especially geographically dispersed and physically distributed processes. Communication overhead is the major problem in these systems and there are a lot of efforts to develop more efficient message passing mechanisms or to improve the network communication speed. This paper provides hard evidence that an efficient implementation of message passing mechanism on multi-computers reduces the execution time of a molecular dynamics code. A well-known program for macromolecular dynamics and mechanics called CHARMm is executed on a networked cluster. 
The performance of CHARMm is measured with two distributed implementations of message passing, namely a kernel-level implementation called DIPC2006 and a renowned library level implementation called MPI. It is shown that the performance of CHARMm on a DIPC2006 configured cluster is by far better than its performance on an optimized MPI configured similar cluster. Even ignoring the favorable points of kernel-level implementations, like safety, privilege, reliability, and primitiveness, the insight is twofold. Scientists are nowadays faced with more computational complexity and look for more efficient systems and mechanisms. Efficient distributed IPC mechanisms have direct effect on running scientistspsila simulations faster, and computer engineers may try harder to develop more efficient distributed implementations of IPC.\",\"PeriodicalId\":345341,\"journal\":{\"name\":\"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications\",\"volume\":\"102 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-12-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISPA.2008.131\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 IEEE International Symposium on Parallel and Distributed Processing with Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPA.2008.131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Parallel programming and distributed programming are two approaches that let scientific applications achieve high performance and fast response times on parallel and distributed systems. Such systems must provide inter-process communication (IPC) mechanisms, such as message passing, as underlying platforms to enable communication between local and especially geographically dispersed, physically distributed processes. Communication overhead is the major problem in these systems, and much effort has gone into developing more efficient message passing mechanisms or improving network communication speed. This paper provides hard evidence that an efficient implementation of the message passing mechanism on multi-computers reduces the execution time of a molecular dynamics code. CHARMm, a well-known program for macromolecular dynamics and mechanics, is executed on a networked cluster. The performance of CHARMm is measured with two distributed implementations of message passing: a kernel-level implementation called DIPC2006 and the renowned library-level implementation MPI. It is shown that the performance of CHARMm on a DIPC2006-configured cluster is far better than its performance on a similar cluster configured with an optimized MPI. Even ignoring the favorable points of kernel-level implementations, such as safety, privilege, reliability, and primitiveness, the insight is twofold: scientists today face greater computational complexity and look for more efficient systems and mechanisms, and efficient distributed IPC mechanisms have a direct effect on running scientists' simulations faster, so computer engineers may try harder to develop more efficient distributed implementations of IPC.
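To make the comparison concrete, the sketch below shows what a library-level message passing exchange looks like with MPI, the baseline implementation named in the abstract: one process sends a buffer of doubles to another and the round trip is timed with MPI_Wtime. This is a minimal illustration written for this summary, not code from the paper or from CHARMm; the payload size and message tag are arbitrary choices.

```c
/* Minimal MPI ping-pong sketch (not from the paper): rank 0 sends a
 * buffer to rank 1 and waits for it to come back, timing the round trip.
 * Build and run (assuming an MPI installation):
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

#define N 4096   /* arbitrary payload size, e.g. a block of packed coordinates */

int main(int argc, char **argv)
{
    double buf[N];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    for (int i = 0; i < N; i++) buf[i] = (double)i;

    if (rank == 0) {
        double t0 = MPI_Wtime();
        MPI_Send(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        double t1 = MPI_Wtime();
        printf("round trip of %d doubles: %.6f s\n", N, t1 - t0);
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

The paper's comparison replaces this user-space library path with the kernel-level DIPC2006 mechanism while the application-level exchange pattern stays the same; the reported performance difference comes from how each implementation moves the messages, not from the structure of the program.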