On energy efficiency of liquid cooled HPC datacenters

M. Patterson, S. Krishnan, John M. Walters
{"title":"On energy efficiency of liquid cooled HPC datacenters","authors":"M. Patterson, S. Krishnan, John M. Walters","doi":"10.1109/ITHERM.2016.7517615","DOIUrl":null,"url":null,"abstract":"HPC Data Center's performance and growth are now being limited by both cost and power. A cost-efficient data center and an energy-efficient data center are all too often mutually exclusive, but they do not have to be. Liquid cooling is one area that, when done right, can improve both costs and energy efficiency. The design for liquid cooling systems generally begins with the ASHRAE liquid cooling datacenter classes. These provide guidance to both datacenter facility cooling system designers and electronic equipment manufacturers by providing a common baseline and understanding of the interface conditions between the cooling and the IT equipment. Further, the liquid cooling classes also suggest possible cooling equipment for a given datacenter class. Due to the aforementioned cooling equipment prescription, perception exists that moving from W1/W2 class environments to W3 or W4 classes represent increased energy efficiency during IT equipment operation. In this paper we show this not to be the case universally and explore a more detailed, technical approach to optimizing both cost and energy efficiency. The range of parameters includes geographical and climate conditions, state of the existing data center cooling infrastructure (greenfield, retrofit, cluster change-out, expansion), and IT level liquid cooling architecture. Through this analysis we show that for energy efficient operation of the IT equipment there exists an optimum liquid operating temperature that can also provide the lowest TCO. This temperature can drive the right capital investment as well as reduce facility operational expense and IT operational expense. We also explore the impact on reliability, the controls architecture, use of efficiency metrics, cluster compute performance, and opportunities for energy re-use.","PeriodicalId":426908,"journal":{"name":"2016 15th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 15th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITHERM.2016.7517615","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

HPC data center performance and growth are now limited by both cost and power. A cost-efficient data center and an energy-efficient data center are all too often mutually exclusive, but they do not have to be. Liquid cooling is one area that, when done right, can improve both cost and energy efficiency. The design of liquid cooling systems generally begins with the ASHRAE liquid-cooled data center classes, which give data center facility cooling system designers and electronic equipment manufacturers a common baseline and a shared understanding of the interface conditions between the cooling plant and the IT equipment. The liquid cooling classes also suggest possible cooling equipment for a given data center class. Because of this equipment prescription, a perception exists that moving from W1/W2 class environments to W3 or W4 classes represents increased energy efficiency during IT equipment operation. In this paper we show that this is not universally the case and explore a more detailed, technical approach to optimizing both cost and energy efficiency. The parameters considered include geographic and climate conditions, the state of the existing data center cooling infrastructure (greenfield, retrofit, cluster change-out, expansion), and the IT-level liquid cooling architecture. Through this analysis we show that, for energy-efficient operation of the IT equipment, there exists an optimum liquid operating temperature that can also provide the lowest TCO. This temperature can drive the right capital investment as well as reduce both facility and IT operational expense. We also explore the impact on reliability, the controls architecture, the use of efficiency metrics, cluster compute performance, and opportunities for energy re-use.
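The trade-off the abstract describes can be illustrated with a minimal sketch (not the authors' model): sweep the facility water supply temperature, charge a toy chiller-energy cost when the supply is colder than the local climate can deliver by free cooling, a toy IT-side penalty (leakage, pump, and fan power) when the coolant runs hot, and a capital cost that drops once chillers can be eliminated, then report the temperature that minimizes the resulting TCO. Every coefficient, threshold, and function name here is a hypothetical placeholder chosen only to show the shape of the optimization, not values from the paper.

```python
# Illustrative TCO-vs-supply-temperature sweep. All numbers are invented
# placeholders; the point is that an interior optimum emerges between the
# chiller-dominated regime (too cold) and the IT-penalty regime (too hot).
# For reference, the ASHRAE W classes bound the facility supply water
# temperature (e.g., W2 up to roughly 27 C, W4 up to roughly 45 C).

def chiller_energy_kwh(supply_c: float, wet_bulb_c: float) -> float:
    """Mechanical cooling energy grows with the lift below what free
    cooling can deliver for the local climate."""
    approach_c = 4.0  # assumed cooling-tower approach temperature
    free_cooling_limit = wet_bulb_c + approach_c
    deficit = max(0.0, free_cooling_limit - supply_c)
    return 100_000.0 * deficit  # toy kWh/yr per degree of chiller lift

def it_penalty_kwh(supply_c: float) -> float:
    """Warmer coolant raises component leakage power and fan/pump speeds;
    modeled here as a simple quadratic penalty above 25 C."""
    excess = max(0.0, supply_c - 25.0)
    return 20_000.0 * excess ** 2

def capex_usd(supply_c: float) -> float:
    """Colder supply temperatures require chillers (more capital);
    warmer ones can rely on dry coolers alone."""
    return 500_000.0 if supply_c < 20.0 else 200_000.0

def tco_usd(supply_c: float, wet_bulb_c: float,
            usd_per_kwh: float = 0.10, years: float = 5.0) -> float:
    """Toy TCO: capital cost plus energy cost over the service life."""
    energy = chiller_energy_kwh(supply_c, wet_bulb_c) + it_penalty_kwh(supply_c)
    return capex_usd(supply_c) + years * usd_per_kwh * energy

if __name__ == "__main__":
    wet_bulb = 18.0  # assumed annual design wet-bulb for the site
    candidates = [t / 2 for t in range(30, 101)]  # 15.0 .. 50.0 C
    best = min(candidates, key=lambda t: tco_usd(t, wet_bulb))
    print(f"optimum supply temperature ~ {best:.1f} C, "
          f"TCO ~ ${tco_usd(best, wet_bulb):,.0f}")
```

Under these assumed coefficients the optimum lands just above the free-cooling limit for the site, which mirrors the abstract's claim that climate and infrastructure state, not the W class label alone, determine the energy- and cost-optimal operating point.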