
2011 International Green Computing Conference and Workshops: Latest Publications

Statistical GPU power analysis using tree-based methods
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008582
Jianmin Chen, Bin Li, Ying Zhang, Lu Peng, J. Peir
Graphics Processing Units (GPUs) have emerged as a promising platform for parallel computation. With a large number of scalar processors and abundant memory bandwidth, GPUs provide substantial computation power. While delivering high computation performance, the GPU also consumes high power and needs to be equipped with sufficient power supplies and cooling systems. Therefore, it is essential to institute an efficient mechanism for evaluating and understanding the power consumption requirement when running real applications on high-end GPUs. In this paper, we present a high-level GPU power consumption model using sophisticated tree-based random forest methods that correlate the power consumption with a set of independent performance variables. This statistical model not only predicts the GPU runtime power consumption accurately, but more importantly, it also provides sufficient insight for understanding the dependence between the GPU runtime power consumption and the individual performance metrics. In order to gain more insight, we use a GPU simulator that can collect more runtime performance metrics than hardware counters. We measure the power consumption of a wide range of CUDA kernels on an experimental system with a GTX 280 GPU as statistical samples for our power analysis. This methodology can be applied to any other CUDA GPU.
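Neither the abstract nor the listing includes code, but the modeling step is easy to sketch. Below is a minimal, hypothetical Python example of tree-based power regression in the spirit the abstract describes: fit a random forest that maps per-kernel performance metrics to measured power, then read variable importances for insight. The metric names, synthetic data, and scikit-learn choice are my assumptions, not the paper's actual setup.

```python
# Hypothetical sketch of a tree-based GPU power model (not the authors' code or data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Placeholder metric names; the paper collects its metrics from a GPU simulator
# and measures power on a GTX 280 system.
FEATURES = ["ipc", "dram_reads", "dram_writes", "shared_mem_accesses", "sm_occupancy"]

rng = np.random.default_rng(0)
X = rng.random((300, len(FEATURES)))                  # stand-in for per-kernel samples
y = 80 + 60 * X[:, 0] + 40 * X[:, 1] + 5 * rng.standard_normal(300)  # synthetic "power"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("MAPE:", mean_absolute_percentage_error(y_test, model.predict(X_test)))
# Feature importances provide the kind of insight the abstract mentions:
# which performance metrics runtime power depends on most strongly.
for name, imp in sorted(zip(FEATURES, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```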
Citations: 50
Power and endurance aware Flash-PCM memory system
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008592
Suraj Pathak, Y. Tay, Q. Wei
Two major performance issues of Flash NAND are the write latency for random writes and the lifetime of NAND chips. Several methods, mainly focusing on the Flash Translation Layer (FTL) or Flash buffer management, have been proposed to address these problems. In this paper, we propose to reduce write traffic to Flash through the following steps: first, we avoid repeated writes to the Flash SSD by detecting redundant writes with a cryptographic hash. We design a set of acceleration techniques to reduce the latency overhead of this extra computation. Then we propose a PCM-based buffer extender for the Flash SSD, where frequent updates to hot Flash pages are written into the PCM layer, which allows in-page updates. Finally, when merging the PCM-updated data into Flash pages, we use a special merging technique to turn the flushes into sequential flushes, as sequential writes on Flash are almost three times as fast as random writes. The redundant-write detection mechanism is maintained in PCM. We test our design using a trace-driven simulator. The results show that, compared to the traditional design, the lifetime of the Flash SSD can be more than quadrupled while consuming 20% less power, with some improvement in write performance as well.
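The first step (filtering redundant writes by content hash) can be illustrated with a short sketch. The Python class below is my own approximation, not the authors' implementation: it keeps a per-page digest table and skips the Flash write when the incoming data hashes to the value already recorded for that logical page.

```python
# Minimal sketch of hash-based redundant-write filtering (illustrative only).
import hashlib

class RedundantWriteFilter:
    """Skips writes whose content matches what is already stored at the page."""

    def __init__(self, flash_write_fn):
        self._digests = {}            # logical page number -> content digest
        self._flash_write = flash_write_fn
        self.skipped = 0

    def write(self, page_no: int, data: bytes) -> None:
        digest = hashlib.sha256(data).digest()
        if self._digests.get(page_no) == digest:
            self.skipped += 1         # redundant write: do not touch Flash
            return
        self._digests[page_no] = digest
        self._flash_write(page_no, data)

# Example use with a stand-in for the real Flash write path.
writes = []
f = RedundantWriteFilter(lambda p, d: writes.append(p))
f.write(7, b"hello")
f.write(7, b"hello")                  # filtered out
print(len(writes), f.skipped)         # 1 1
```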
Citations: 1
QuARES: Quality-aware data collection in energy harvesting sensor networks
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008548
Nga Dang, E. Bozorgzadeh, N. Venkatasubramanian
Renewable energy technology has become a promising solution to the energy concerns caused by limited battery capacity in wireless sensor networks. While this enables us to prolong the lifetime of a sensor network (perpetually), unstable environmental energy sources bring challenges to the design of sustainable sensor networks. In this paper, we propose an adaptive energy harvesting management framework, QuARES, which exploits an application's tolerance to quality degradation to adjust application quality based on energy harvesting conditions. The proposed framework consists of two stages: an offline stage, which uses predictions of harvested energy to allocate an energy budget to time slots, and an online stage, which handles fluctuations in the time-varying energy harvesting profile. We implemented the application and our framework in a network simulator, QualNet. In comparison with other approaches (e.g., [9]), our system offers improved sustainability (low energy consumption, no node deaths) during operation, with data quality improvements ranging from 30–70%. QuARES is currently being deployed in a campus-wide pervasive space at UCI called Responsphere [11].
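A toy rendering of the two-stage idea follows; the proportional budgeting rule and the sampling-rate quality knob are assumptions of mine, not details taken from the paper.

```python
# Illustrative sketch of harvest-aware quality adaptation (not the QuARES code).

def offline_budgets(predicted_harvest, reserve=0.9):
    """Offline stage: split the predicted harvested energy into per-slot budgets."""
    return [reserve * e for e in predicted_harvest]

def online_quality(budget, actual_harvest, cost_per_sample, max_rate):
    """Online stage: pick a sampling rate the current slot's energy can sustain."""
    available = min(budget, actual_harvest)     # react to harvesting shortfall
    rate = int(available // cost_per_sample)
    return max(1, min(rate, max_rate))          # degrade quality, never die

predicted = [120.0, 80.0, 40.0, 10.0]           # predicted energy per slot (arbitrary units)
actual    = [110.0, 85.0, 20.0, 12.0]           # what was actually harvested
for slot, (b, a) in enumerate(zip(offline_budgets(predicted), actual)):
    print(slot, online_quality(b, a, cost_per_sample=2.0, max_rate=50))
```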
Citations: 23
Dynamic memoization for energy efficiency in financial applications
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008559
G. Agosta, Marco Bessi, E. Capra, C. Francalanci
Software applications have a direct impact on IT energy consumption, as they indirectly drive hardware operations. Optimizing algorithms has a direct beneficial impact on energy efficiency, but it requires domain knowledge and an accurate analysis of the code, which may be infeasible and too costly for large code bases. In this paper we present an approach based on dynamic memoization to increase software energy efficiency. This involves identifying a subset of pure functions that can be tabulated and automatically storing the results corresponding to the most frequent invocations. We implemented a prototype software system that applies memoization and tested it on a set of financial functions. Empirical results show average energy savings of 74% and time savings of 79%.
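The core mechanism (tabulating pure functions and reusing results of frequent invocations) is straightforward to illustrate. The sketch below uses Python's built-in LRU cache as a stand-in for the paper's memoization layer; the discount_factor function and cache size are hypothetical examples, not the authors' benchmark.

```python
# Illustrative memoization of a pure financial function (not the authors' system).
from functools import lru_cache

@lru_cache(maxsize=4096)        # bounded table of recent/frequent invocations
def discount_factor(rate_bps: int, periods: int) -> float:
    """Pure function: same inputs always give the same output, so it is safe to tabulate."""
    return 1.0 / (1.0 + rate_bps / 10_000.0) ** periods

# Repeated calls with the same arguments hit the table instead of recomputing,
# which is where time and energy savings come from.
for _ in range(1_000_000):
    discount_factor(525, 12)

print(discount_factor.cache_info())   # hits vs. misses
```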
Citations: 13
Power and frequency analysis for data and control independence in embedded processors
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008593
Farzad Samie, A. Baniasadi
In this work we study control independence in embedded processors. We classify control independent instructions into data dependent and data independent and measure each group's frequency and behavior. Moreover, we study how control independent instructions impact power dissipation and resource utilization. We also investigate control independent instructions' behavior for different processors and branch predictors. Our study shows that data independent instructions account for 34% of the control independent instructions in the applications studied here. We also show that control independent instructions account for up to 12% of the processor energy and 15.6%, 11.2% and 8.6% of the instructions fetched, decoded and executed, respectively. We also show that control independent instruction frequency increases with register update unit (RUU) size and issue width but shows little sensitivity to branch predictor size. In addition, we illustrate that control independent data independent instructions account for up to 6% of the processor energy. We also show that control independent data independent instruction frequency increases with RUU size and issue width.
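As a rough illustration of the classification only (not the paper's simulation methodology), the sketch below marks instructions after a branch's reconvergence point as control independent and splits them into data independent versus data dependent according to whether they read a register written on either side of the branch. The instruction encoding and reconvergence detection are simplified assumptions.

```python
# Toy classification of control-independent instructions (illustrative only).
# Each instruction is (dest_register, [source_registers]).

def classify_after_reconvergence(taken_path, not_taken_path, post_reconv):
    """Post-reconvergence instructions are control independent; split them into
    data independent (CIDI) vs. data dependent (CIDD) on path-produced values."""
    tainted = {d for d, _ in taken_path} | {d for d, _ in not_taken_path}
    cidi, cidd = [], []
    for dest, srcs in post_reconv:
        if any(s in tainted for s in srcs):
            cidd.append((dest, srcs))
            tainted.add(dest)          # dependence propagates through registers
        else:
            cidi.append((dest, srcs))
            tainted.discard(dest)      # value is overwritten path-independently
    return cidi, cidd

taken     = [("r1", ["r5"])]
not_taken = [("r1", ["r6"])]
post      = [("r2", ["r1"]), ("r3", ["r7"])]
print(classify_after_reconvergence(taken, not_taken, post))
# CIDI: [('r3', ['r7'])], CIDD: [('r2', ['r1'])]
```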
Citations: 1
Exploring performance, power, and temperature characteristics of 3D systems with on-chip DRAM
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008579
Jie Meng, Daniel Rossell, A. Coskun
3D integration enables stacking DRAM layers on processor cores within the same chip. On-chip memory has the potential to dramatically improve performance due to lower memory access latency and higher bandwidth. Higher core performance increases power density, requiring a thorough evaluation of the tradeoff between performance and temperature. This paper presents a comprehensive framework for exploring the power, performance, and temperature characteristics of 3D systems with on-chip DRAM. Utilizing this framework, we quantify the performance improvement as well as the power and thermal profiles of parallel workloads running on a 16-core 3D system with on-chip DRAM. The 3D system improves application performance by 72.6% on average in comparison to an equivalent 2D chip with off-chip memory. Power consumption per core increases by up to 32.7%. The increase in peak chip temperature, however, is limited to 1.5°C as the lower power DRAM layers share the heat of the hotter cores. Experimental results show that while DRAM stacking is a promising technique for high-end systems, efficient thermal management strategies are needed in embedded systems with cost or space restrictions to compensate for the lack of efficient cooling.
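As a back-of-the-envelope illustration of why stacked DRAM helps and what the thermal concern is (not the paper's simulation framework), the sketch below applies the textbook average-memory-access-time formula with off-chip versus on-chip DRAM latencies and computes a simple power density figure; all numbers are placeholders.

```python
# Toy comparison of off-chip vs. 3D-stacked DRAM (illustrative numbers only).

def amat(l1_hit_cycles, l1_miss_rate, dram_latency_cycles):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return l1_hit_cycles + l1_miss_rate * dram_latency_cycles

off_chip = amat(l1_hit_cycles=3, l1_miss_rate=0.05, dram_latency_cycles=200)
stacked  = amat(l1_hit_cycles=3, l1_miss_rate=0.05, dram_latency_cycles=60)
print(f"AMAT off-chip: {off_chip:.1f} cycles, stacked: {stacked:.1f} cycles")

# Faster cores draw more power in the same footprint, which is the thermal
# concern the paper evaluates: power density = power / die area.
core_power_w, die_area_cm2 = 4.0, 0.12          # placeholder per-core numbers
print(f"power density: {core_power_w / die_area_cm2:.1f} W/cm^2")
```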
Citations: 8
CACM: Current-aware capacity management in consolidated server enclosures
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008588
Hui Chen, Meina Song, Junde Song, Ada Gavrilovska, K. Schwan, M. Kesavan
Using virtualization to consolidate servers is a routine method for reducing power consumption in data centers. Current practice, however, assumes homogeneous servers that operate in a homogeneous physical environment. Experimental evidence collected in our mid-size, fully instrumented data center challenges those assumptions by finding that chassis construction can significantly influence cooling power usage. In particular, the multiple power domains in a single chassis can have different levels of power efficiency, and further, power consumption is affected by the differences in electrical current levels across these two domains. This paper describes experiments designed to validate these facts, followed by a proposed current-aware capacity management system (CACM) that controls resource allocation across power domains by periodically migrating virtual machines among servers. The method not only fully accounts for the influence of the current difference between the two domains, but also enforces power caps and safety levels for node temperatures. Comparisons with industry-standard techniques that are not aware of physical constraints show that current-awareness can improve performance as well as power consumption, with about 16% energy savings. Such savings indicate the utility of adding physical awareness to the ways in which IT systems are managed.
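A minimal sketch of the balancing idea follows, assuming two power domains per chassis, per-VM current estimates, and a least-loaded placement rule; these are my simplifications, not the controller described in the paper.

```python
# Illustrative current-aware placement across two chassis power domains
# (a simplification of the CACM idea, not the authors' controller).

def choose_domain(domain_current, vm_current, power_caps):
    """Place/migrate a VM to the domain that keeps current levels balanced
    without exceeding that domain's cap."""
    candidates = [
        d for d, cur in domain_current.items()
        if cur + vm_current <= power_caps[d]
    ]
    if not candidates:
        raise RuntimeError("no domain can host the VM within its cap")
    # Least-loaded domain first: reduces the current imbalance between domains.
    return min(candidates, key=lambda d: domain_current[d])

domain_current = {"domain_A": 18.0, "domain_B": 11.0}   # amps, illustrative
power_caps     = {"domain_A": 24.0, "domain_B": 24.0}
target = choose_domain(domain_current, vm_current=3.5, power_caps=power_caps)
domain_current[target] += 3.5
print(target, domain_current)        # domain_B selected
```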
Citations: 2
GDCSim: A tool for analyzing Green Data Center design and resource management techniques
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008612
S. Gupta, Rose Robin Gilbert, Ayan Banerjee, Z. Abbasi, T. Mukherjee, G. Varsamopoulos
Energy consumption in data centers can be reduced by efficient design of the data centers and efficient management of computing resources and cooling units. A major obstacle in the analysis of data centers is the lack of a holistic simulator, where the impact of new computing resource (or cooling) management techniques can be tested with different designs (i.e., layouts and configurations) of data centers. To fill this gap, this paper proposes the Green Data Center Simulator (GDCSim) for studying the energy efficiency of data centers under various data center geometries, workload characteristics, platform power management schemes, and scheduling algorithms. GDCSim is used to iteratively design green data centers. Further, it is validated against established CFD simulators. GDCSim is developed as a part of the BlueTool infrastructure project at Impact Lab.
Citations: 64
MIND: A black-box energy consumption model for disk arrays
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008571
Zhuo Liu, Jian Zhou, Weikuan Yu, Fei Wu, X. Qin, C. Xie
Energy consumption is becoming a growing concern in data centers. Many energy-conservation techniques have been proposed to address this problem. However, an integrated method is still needed to evaluate the energy efficiency of storage systems and various power conservation techniques. Extensive measurements of different workloads on storage systems are often very time-consuming and require expensive equipment. We have analyzed changing characteristics, such as power and performance, of stand-alone disks and RAID arrays, and then defined MIND as a black-box power model for RAID arrays. MIND is devised to quantitatively measure the power consumption of redundant disk arrays running different workloads in a variety of execution modes. In MIND, we define five modes (idle, standby, and several types of access) and four actions to precisely characterize power states and changes of RAID arrays. In addition, we develop corresponding metrics for each mode and action, and then integrate the model and a measurement algorithm into a popular trace tool, blktrace. With these features, we are able to run different IO traces on large-scale storage systems with power conservation techniques. Accurate energy consumption and performance statistics are then collected to evaluate the energy efficiency of storage system designs and power conservation techniques. Our experiments running both synthetic and real-world workloads on enterprise RAID arrays show that MIND can estimate the power consumption of disk arrays with an error rate of less than 2%.
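A back-of-the-envelope version of a mode-based black-box estimate is sketched below: per-mode power figures multiplied by per-mode residency times taken from a trace. The mode power values and residencies are placeholders, not MIND's calibrated model or metrics.

```python
# Illustrative mode-based disk-array energy estimate (not the MIND model itself).

# Placeholder per-mode power figures in watts; MIND calibrates such values per
# array from measurements, with separate metrics for each mode and action.
MODE_POWER_W = {
    "idle": 8.0,
    "standby": 1.5,
    "sequential_read": 13.0,
    "sequential_write": 14.0,
    "random_access": 16.0,
}

def estimate_energy(mode_residency_s, transition_energy_j=0.0):
    """Energy = sum over modes of (power * time in mode) + transition cost."""
    return sum(MODE_POWER_W[m] * t for m, t in mode_residency_s.items()) + transition_energy_j

# Residencies would come from a blktrace-style log of the workload.
residency = {"idle": 120.0, "random_access": 45.0, "sequential_read": 30.0}
print(f"{estimate_energy(residency):.1f} J")
```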
Citations: 9
DiscPOP: Power-aware buffer management for disk accesses
Pub Date : 2011-07-25 DOI: 10.1109/IGCC.2011.6008590
Xiongzi Ge, D. Feng, D. Du
Much research has been conducted on energy-efficient cache buffer management for disk-based storage systems. Some of these approaches use a greedy prefetching technique to artificially increase disk idle intervals when there are a large number of known future requests. However, this might result in a sub-optimal solution by not exploiting the relationship between the I/O access pattern (sequential/random) and the application pattern (CPU time required for computation). In a CPU-bound application, by explicitly taking this relationship into account, it may reduce energy conservation by up to 50% and increase the power cycle count by 100% compared to an existing efficient prefetching scheme without this consideration. In this paper, we consider the tradeoff between disk power consumption, performance guarantees, and disk reliability together by proposing a Disk-characteristic-based Power-Optimal Prefetching (DiscPOP) scheme. Specifically, we make two contributions: (i) a theoretical model is developed to analyze energy-efficient cache buffer management in the disk I/O system, and the problem is formulated as an optimization problem that we show can be solved via an Integer Linear Programming (ILP) technique; (ii) we propose a simple divide-and-conquer offline algorithm named Greedy Partition (GP) to divide the problem into several smaller ones and solve them separately via an ILP solver. We use trace-driven simulations to evaluate our proposed scheme. The results show that GP outperforms traditional aggressive prefetching with up to 29.2% more disk energy conservation and a 20.6% reduction in power cycles.
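The paper formulates and solves an ILP; the sketch below only captures the intuition behind power-aware prefetch scheduling with a simple break-even rule, where prefetching the data for a whole gap at once creates idle intervals long enough to justify a spin-down. The power numbers, transition energies, and policy are my placeholders, not the DiscPOP formulation.

```python
# Toy break-even analysis behind power-aware prefetching (not the DiscPOP ILP).

ACTIVE_W, STANDBY_W = 8.0, 1.0       # illustrative disk power states
SPINDOWN_J, SPINUP_J = 6.0, 12.0     # illustrative transition energies

def break_even_seconds():
    """Shortest idle interval for which spinning down saves energy."""
    return (SPINDOWN_J + SPINUP_J) / (ACTIVE_W - STANDBY_W)

def plan_idle_gaps(request_times, service_s=0.05):
    """Given known future request times (enabled by prefetching the data for a
    whole gap at once), decide per gap whether a spin-down is worthwhile."""
    decisions = []
    for prev, nxt in zip(request_times, request_times[1:]):
        gap = nxt - (prev + service_s)
        decisions.append((gap, gap > break_even_seconds()))
    return decisions

print(break_even_seconds())                        # ~2.57 s with these numbers
print(plan_idle_gaps([0.0, 1.0, 6.0, 6.2, 20.0]))  # only the long gaps pay off
```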
Citations: 4