Local Resource Shaper for MapReduce

Peng Lu, Young Choon Lee, V. Gramoli, Luke M. Leslie, Albert Y. Zomaya
{"title":"本地资源整形器用于MapReduce","authors":"Peng Lu, Young Choon Lee, V. Gramoli, Luke M. Leslie, Albert Y. Zomaya","doi":"10.1109/CloudCom.2014.55","DOIUrl":null,"url":null,"abstract":"Resource capacity is often over provisioned to primarily deal with short periods of peak load. Shaping these peaks by shifting them to low utilization periods (valleys) is referred to as \"resource consumption shaping\". While originally aimed at the data center level, the resource consumption shaping we consider focuses on local resources, like CPU or I/O as we have identified that individual jobs also incur load peaks and valleys on these resources. In this paper, we present Local Resource Shaper (LRS), which limits fairness in resource sharing between co-located MapReduce tasks. LRS enables Hadoop to maximize resource utilization and minimize resource contention independently of job type. Co-located MapReduce tasks are often prone to resource contention (i.e., Load peak) due to similar resource usage patterns particularly with traditional fair resource sharing. In essence, LRS differentiates co-located tasks through active and passive slots that serve as containers for interchangeable map or reduce tasks. LRS lets an active slot consume as much resources as possible, and a passive slot make use of any unused resources. LRS leverages such slot differentiation with its new scheduler, Interleave. Our results show that LRS always outperforms the best static slot configuration with three Hadoop schedulers in terms of both resource utilization and performance.","PeriodicalId":249306,"journal":{"name":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Local Resource Shaper for MapReduce\",\"authors\":\"Peng Lu, Young Choon Lee, V. Gramoli, Luke M. Leslie, Albert Y. Zomaya\",\"doi\":\"10.1109/CloudCom.2014.55\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Resource capacity is often over provisioned to primarily deal with short periods of peak load. Shaping these peaks by shifting them to low utilization periods (valleys) is referred to as \\\"resource consumption shaping\\\". While originally aimed at the data center level, the resource consumption shaping we consider focuses on local resources, like CPU or I/O as we have identified that individual jobs also incur load peaks and valleys on these resources. In this paper, we present Local Resource Shaper (LRS), which limits fairness in resource sharing between co-located MapReduce tasks. LRS enables Hadoop to maximize resource utilization and minimize resource contention independently of job type. Co-located MapReduce tasks are often prone to resource contention (i.e., Load peak) due to similar resource usage patterns particularly with traditional fair resource sharing. In essence, LRS differentiates co-located tasks through active and passive slots that serve as containers for interchangeable map or reduce tasks. LRS lets an active slot consume as much resources as possible, and a passive slot make use of any unused resources. LRS leverages such slot differentiation with its new scheduler, Interleave. 
Our results show that LRS always outperforms the best static slot configuration with three Hadoop schedulers in terms of both resource utilization and performance.\",\"PeriodicalId\":249306,\"journal\":{\"name\":\"2014 IEEE 6th International Conference on Cloud Computing Technology and Science\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE 6th International Conference on Cloud Computing Technology and Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CloudCom.2014.55\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 6th International Conference on Cloud Computing Technology and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CloudCom.2014.55","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7

Abstract

Resource capacity is often over-provisioned primarily to deal with short periods of peak load. Shaping these peaks by shifting them to low-utilization periods (valleys) is referred to as "resource consumption shaping". While originally aimed at the data center level, the resource consumption shaping we consider focuses on local resources, such as CPU and I/O, since we have identified that individual jobs also incur load peaks and valleys on these resources. In this paper, we present Local Resource Shaper (LRS), which limits fairness in resource sharing between co-located MapReduce tasks. LRS enables Hadoop to maximize resource utilization and minimize resource contention independently of job type. Co-located MapReduce tasks are often prone to resource contention (i.e., load peaks) due to similar resource usage patterns, particularly under traditional fair resource sharing. In essence, LRS differentiates co-located tasks through active and passive slots that serve as containers for interchangeable map or reduce tasks. LRS lets an active slot consume as much of a resource as possible, while a passive slot makes use of any resources left unused. LRS leverages this slot differentiation with its new scheduler, Interleave. Our results show that LRS always outperforms the best static slot configuration across three Hadoop schedulers in terms of both resource utilization and performance.
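The abstract does not specify how LRS enforces the active/passive distinction between co-located slots. The sketch below is a minimal illustration of the general idea only, assuming a Unix-like OS and using scheduling niceness (os.nice) as a stand-in mechanism; the function names (run_in_slot, cpu_bound_task) and the choice of os.nice are illustrative assumptions, not the paper's actual implementation. The active slot runs a CPU-bound task at normal priority, while the passive slot runs at the lowest priority so it only absorbs CPU cycles the active slot leaves idle.

```python
import os
import multiprocessing as mp

def cpu_bound_task(label, iterations):
    # Stand-in for an interchangeable map or reduce task: pure CPU work.
    total = 0
    for i in range(iterations):
        total += i * i
    print(f"{label} finished (pid={os.getpid()}, niceness={os.nice(0)})")

def run_in_slot(label, iterations, passive=False):
    if passive:
        # Passive slot: drop to the lowest scheduling priority so this task
        # only consumes CPU cycles the active slot leaves unused.
        os.nice(19)
    cpu_bound_task(label, iterations)

if __name__ == "__main__":
    # One active and one passive slot co-located on the same node.
    active = mp.Process(target=run_in_slot, args=("active task", 30_000_000, False))
    passive = mp.Process(target=run_in_slot, args=("passive task", 30_000_000, True))
    active.start()
    passive.start()
    active.join()
    passive.join()
```

On a single core, the active task dominates the CPU and the passive task makes progress mainly when the active task is idle or blocked, which mirrors the "active slot consumes as much as possible, passive slot uses leftovers" behavior described in the abstract.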