Workload interleaving with performance guarantees in data centers

Feng Yan, E. Smirni
{"title":"Workload interleaving with performance guarantees in data centers","authors":"Feng Yan, E. Smirni","doi":"10.1109/NOMS.2016.7502934","DOIUrl":null,"url":null,"abstract":"In the era of global, large scale data centers residing in clouds, many applications and users share the same pool of resources for the purpose of reducing costs while maintaining high performance. When multiple workloads access the same resources concurrently, their requests are interleaved, possibly causing delays. Providing performance isolation to individual workloads such that they meet their own performance objectives is important and challenging. The challenge lies in finding accurate, robust, compact metrics and models that drive algorithms which can meet different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies and tools aiming at solving the challenging performance isolation problem of workload interleaving in data centers, focusing on both storage components and computing components. At the storage node level, we consider methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. At the storage cluster level, we propose methodologies on how to efficiently conduct work consolidation and schedule asynchronous updates without violating user performance targets. At the computing node level, we present priority scheduling middleware that employs different policies to schedule background tasks. Finally, at the computing cluster level, we develop a new Hadoop scheduler called DyScale to exploit capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives. All works have been evaluated through extensive simulation using enterprise traces or real testbed implementation, and have been accepted for publications in leading performance conferences.","PeriodicalId":344879,"journal":{"name":"NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NOMS.2016.7502934","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources in order to reduce costs while maintaining high performance. When multiple workloads access the same resources concurrently, their requests are interleaved, possibly causing delays. Providing performance isolation to individual workloads so that each meets its own performance objectives is both important and challenging. The challenge lies in finding accurate, robust, and compact metrics and models that drive algorithms capable of meeting different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies and tools aimed at solving the challenging performance isolation problem of workload interleaving in data centers, focusing on both storage and computing components. At the storage node level, we consider methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. At the storage cluster level, we propose methodologies for efficiently conducting work consolidation and scheduling asynchronous updates without violating user performance targets. At the computing node level, we present priority scheduling middleware that employs different policies to schedule background tasks. Finally, at the computing cluster level, we develop a new Hadoop scheduler called DyScale that exploits the capabilities offered by heterogeneous cores to achieve a variety of performance objectives. All of this work has been evaluated through extensive simulation using enterprise traces or through real testbed implementation, and has been accepted for publication in leading performance conferences.
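As a rough illustration of the priority-based interleaving idea sketched in the abstract (not the dissertation's actual middleware or policies), the following Python toy scheduler always serves foreground user requests before background tasks such as reliability scrubbing or power-saving work; the Task and PriorityScheduler names and the priority encoding are hypothetical.

    import heapq
    from dataclasses import dataclass, field

    # Illustrative sketch only: foreground (user) requests preempt background
    # work in dispatch order, approximating the performance-isolation goal
    # described in the abstract. This is not the paper's algorithm.

    @dataclass(order=True)
    class Task:
        priority: int              # 0 = foreground (user), 1 = background
        arrival: float             # ties broken by arrival time
        name: str = field(compare=False)

    class PriorityScheduler:
        def __init__(self) -> None:
            self._queue: list[Task] = []

        def submit(self, task: Task) -> None:
            heapq.heappush(self._queue, task)

        def next_task(self) -> Task | None:
            # Foreground tasks sort first; background work is dispatched only
            # when no user request is waiting.
            return heapq.heappop(self._queue) if self._queue else None

    if __name__ == "__main__":
        sched = PriorityScheduler()
        sched.submit(Task(1, 0.0, "disk-scrub"))      # background
        sched.submit(Task(0, 0.1, "user-read"))       # foreground
        sched.submit(Task(0, 0.2, "user-write"))      # foreground
        while (t := sched.next_task()) is not None:
            print(f"running {t.name} (priority {t.priority})")

Running the sketch prints the two user requests before the background disk-scrub task, which is the essence of giving user traffic precedence over background workloads; the dissertation's middleware additionally considers latency targets and resource utilization when deciding when background work may run.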