Hybrid preemptive scheduling of MPI applications on the grids

Aurélien Bouteiller, Hinde-Lilia Bouziane, T. Hérault, Pierre Lemarinier, F. Cappello
{"title":"Hybrid preemptive scheduling of MPI applications on the grids","authors":"Aurélien Bouteiller, Hinde-Lilia Bouziane, T. Hérault, Pierre Lemarinier, F. Cappello","doi":"10.1109/GRID.2004.39","DOIUrl":null,"url":null,"abstract":"Time sharing between cluster resources in grid is a major issue in cluster and grid integration. Classical grid architecture involves a higher level scheduler which submits nonoverlapping jobs to the independent batch schedulers of each cluster of the grid. The sequentiality induced by this approach does not fit with the expected number of users and job heterogeneity of the grids. Time sharing techniques address this issue by allowing simultaneous executions of many applications on the same resources. Co-scheduling and gang scheduling are the two best known techniques for time sharing cluster resources. Co-scheduling relies on the operating system of each node to schedule the processes of every application. Gang scheduling ensures that the same application is scheduled on all nodes simultaneously. Previous work has proven that co-scheduling techniques outperforms gang scheduling when physical memory is not exhausted. In this paper, we introduce a new hybrid sharing technique providing checkpoint based explicit memory management. It consists in co-scheduling parallel applications within a set, until the memory capacity of the node is reached, and using gang scheduling related techniques to switch from one set to another one. We compare experimentally the merits of the three solutions: co, gang and hybrid scheduling, in the context of out-of-core computing, which is likely to occur in the grid context, where many users share the same resources. The experiments show that the hybrid solution is as efficient as the co-scheduling technique when the physical memory is not exhausted, and is more efficient than gang scheduling and co-scheduling when physical memory is exhausted.","PeriodicalId":335281,"journal":{"name":"Fifth IEEE/ACM International Workshop on Grid Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fifth IEEE/ACM International Workshop on Grid Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GRID.2004.39","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Time sharing of cluster resources in a grid is a major issue in cluster and grid integration. A classical grid architecture involves a higher-level scheduler that submits non-overlapping jobs to the independent batch schedulers of each cluster of the grid. The sequentiality induced by this approach does not fit the expected number of users and the job heterogeneity of grids. Time-sharing techniques address this issue by allowing simultaneous execution of many applications on the same resources. Co-scheduling and gang scheduling are the two best-known techniques for time-sharing cluster resources. Co-scheduling relies on the operating system of each node to schedule the processes of every application. Gang scheduling ensures that the same application is scheduled on all nodes simultaneously. Previous work has shown that co-scheduling techniques outperform gang scheduling when physical memory is not exhausted. In this paper, we introduce a new hybrid sharing technique providing checkpoint-based explicit memory management. It consists of co-scheduling the parallel applications within a set, up to the memory capacity of the node, and using gang-scheduling techniques to switch from one set to another. We experimentally compare the merits of the three solutions (co-, gang, and hybrid scheduling) in the context of out-of-core computing, which is likely to occur on grids where many users share the same resources. The experiments show that the hybrid solution is as efficient as co-scheduling when physical memory is not exhausted, and more efficient than both gang scheduling and co-scheduling when physical memory is exhausted.
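The hybrid policy described above can be summarized as: pack applications into sets whose combined memory footprint fits in a node's physical memory, let the operating system co-schedule the members of the active set, and use checkpoint/restart to switch between sets gang-style. The following is a minimal illustrative sketch of that policy; the names (`Job`, `checkpoint`, `restart`) are hypothetical placeholders and do not reproduce the paper's actual MPI checkpointing implementation.

```python
# Sketch of the hybrid scheduling policy: memory-bounded sets of
# co-scheduled jobs, switched via (placeholder) checkpoint/restart.
from dataclasses import dataclass
from typing import List


@dataclass
class Job:
    name: str
    mem_per_node: int  # per-node resident memory footprint, in MB


def build_sets(jobs: List[Job], node_memory: int) -> List[List[Job]]:
    """Greedily pack jobs into sets whose combined per-node footprint
    fits in physical memory, so members of a set can be co-scheduled
    by the node operating system without swapping."""
    sets: List[List[Job]] = []
    for job in sorted(jobs, key=lambda j: j.mem_per_node, reverse=True):
        for s in sets:
            if sum(j.mem_per_node for j in s) + job.mem_per_node <= node_memory:
                s.append(job)
                break
        else:
            sets.append([job])
    return sets


def checkpoint(s: List[Job]) -> None:
    """Placeholder for a coordinated checkpoint that frees the set's memory."""
    print(f"checkpointing {[j.name for j in s]}")


def restart(s: List[Job]) -> None:
    """Placeholder for restarting every process of the set on all nodes."""
    print(f"restarting {[j.name for j in s]}")


def run_one_quantum(sets: List[List[Job]]) -> None:
    """Gang-like switching: each set runs co-scheduled for one time
    quantum, then is checkpointed so the next set can use the memory."""
    for s in sets:
        restart(s)
        # ... co-scheduled execution of the whole set for one quantum ...
        checkpoint(s)


if __name__ == "__main__":
    jobs = [Job("solver", 600), Job("render", 300), Job("stats", 200)]
    sets = build_sets(jobs, node_memory=1024)
    print("sets:", [[j.name for j in s] for s in sets])
    run_one_quantum(sets)
```

Within a set, scheduling stays with the OS (as in plain co-scheduling); only the switch between sets requires the global, checkpoint-based preemption, which is what bounds memory pressure when many applications share the nodes.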