Towards Optimality in Parallel Job Scheduling

Benjamin Berg, Jan-Pieter L. Dorsman, Mor Harchol-Balter
{"title":"Towards Optimality in Parallel Job Scheduling","authors":"Benjamin Berg, Jan-Pieter L. Dorsman, Mor Harchol-Balter","doi":"10.1145/3219617.3219666","DOIUrl":null,"url":null,"abstract":"To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip. To effectively leverage these multi-core chips, one must decide how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is a tradeoff: allocating more cores to an individual job reduces the job's runtime, but decreases the efficiency of the overall system. We ask how the system should assign cores to jobs so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. We also consider a class of \"fixed-width\" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, fixed-width policies which use the optimal fixed level of parallelization, k*, become near-optimal as the number of cores becomes large. In the case where jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. In particular, EQUI is no longer optimal, but a very simple policy, GREEDY*, performs well empirically.","PeriodicalId":210440,"journal":{"name":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3219617.3219666","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 17

Abstract

To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip. To effectively leverage these multi-core chips, one must decide how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is a tradeoff: allocating more cores to an individual job reduces the job's runtime, but decreases the efficiency of the overall system. We ask how the system should assign cores to jobs so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. We also consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, fixed-width policies which use the optimal fixed level of parallelization, k*, become near-optimal as the number of cores becomes large. In the case where jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. In particular, EQUI is no longer optimal, but a very simple policy, GREEDY*, performs well empirically.
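To make the policies described above concrete, here is a minimal Python sketch (not from the paper) of the two allocation rules: EQUI, which splits the cores evenly across all jobs in the system, and a fixed-width policy, which runs every job on exactly k cores. The speedup curve s(k) = sqrt(k) and all function names are illustrative assumptions chosen only to show the parallelism/efficiency tradeoff, not the authors' model or analysis.

```python
from __future__ import annotations

def speedup(k: float) -> float:
    # Assumed sublinear speedup curve s(k) = sqrt(k), with s(1) = 1.
    # The paper considers general speedup curves; this one is only illustrative.
    return k ** 0.5

def equi_allocation(n_cores: int, n_jobs: int) -> list[float]:
    # EQUI: continuously divide the cores evenly across all jobs in the system.
    return [n_cores / n_jobs] * n_jobs if n_jobs else []

def fixed_width_allocation(n_cores: int, n_jobs: int, k: int) -> list[int]:
    # Fixed-width-k: run as many jobs as fit at exactly k cores each; queue the rest.
    running = min(n_jobs, n_cores // k)
    return [k] * running + [0] * (n_jobs - running)

if __name__ == "__main__":
    N = 64  # total cores (hypothetical system size)
    print("EQUI, 5 jobs on", N, "cores:", equi_allocation(N, 5))
    print("Fixed-width k=16, 5 jobs:  ", fixed_width_allocation(N, 5, 16))
    # The tradeoff: giving a job more cores shrinks its runtime (speedup grows),
    # but the work completed per core, s(k)/k, falls because s is sublinear.
    for k in (1, 4, 16, 64):
        print(f"k={k:2d}  speedup={speedup(k):5.2f}  per-core efficiency={speedup(k)/k:.3f}")
```

The printed per-core efficiencies show why the choice of k matters: a fixed-width policy with a larger k finishes each job sooner but wastes capacity, which is exactly the tension the optimal fixed width k* balances.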