Latest Publications: 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)

A Spot Capacity Market to Increase Power Infrastructure Utilization in Multi-tenant Data Centers
M. A. Islam, Xiaoqi Ren, Shaolei Ren, A. Wierman
Despite the common practice of oversubscription, power capacity is largely under-utilized in data centers. A significant factor driving this under-utilization is fluctuation of the aggregate power demand, resulting in unused “spot (power) capacity”. In this paper, we tap into spot capacity for improving power infrastructure utilization in multi-tenant data centers, an important but under-explored type of data center where multiple tenants house their own physical servers. We propose a novel market, called SpotDC, to allocate spot capacity to tenants on demand. Specifically, SpotDC extracts tenants’ rack-level spot capacity demand through an elastic demand function, based on which the operator sets the market price for spot capacity allocation. We evaluate SpotDC using both testbed experiments and simulations, demonstrating that SpotDC improves power infrastructure utilization and creates a “win-win” situation: the data center operator increases its profit (by nearly 10%), while tenants improve their performance (by 1.2–1.8x on average compared to the no spot capacity case, yet at a marginal cost).
{"title":"A Spot Capacity Market to Increase Power Infrastructure Utilization in Multi-tenant Data Centers","authors":"M. A. Islam, Xiaoqi Ren, Shaolei Ren, A. Wierman","doi":"10.1145/3078505.3078542","DOIUrl":"https://doi.org/10.1145/3078505.3078542","url":null,"abstract":"Despite the common practice of oversubscription, power capacity is largely under-utilized in data centers. A significant factor driving this under-utilization is fluctuation of the aggregate power demand, resulting in unused “spot (power) capacity”. In this paper, we tap into spot capacity for improving power infrastructure utilization in multi-tenant data centers, an important but under-explored type of data center where multiple tenants house their own physical servers. We propose a novel market, called SpotDC, to allocate spot capacity to tenants on demand. Specifically, SpotDC extracts tenants’ racklevel spot capacity demand through an elastic demand function, based on which the operator sets the market price for spot capacity allocation. We evaluate SpotDC using both testbed experiments and simulations, demonstrating that SpotDC improves power infrastructure utilization and creates a “win-win” situation: the data center operator increases its profit (by nearly 10%), while tenants improve their performance (by 1.2–1.8x on average compared to the no spot capacity case, yet at a marginal cost).","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"735 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122704608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
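
The abstract above describes SpotDC only at a high level: each tenant expresses a rack-level elastic demand function for spot power capacity, and the operator sets a market price that allocates whatever capacity is currently unused. The snippet below is a minimal sketch of how such price-based allocation could work, assuming linear demand curves of the form d(p) = max(0, a − b·p) and a bisection search for a market-clearing price; the `Tenant` class, the parameter values, and the 30 kW spot-capacity figure are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    """Hypothetical rack-level elastic demand curve: d(p) = max(0, a - b * p)."""
    name: str
    a: float  # demand (kW) at price zero
    b: float  # price sensitivity (kW per $/kW)

    def demand(self, price: float) -> float:
        return max(0.0, self.a - self.b * price)

def clearing_price(tenants, spot_capacity_kw, p_max=100.0, iters=60):
    """Find a price at which aggregate spot-capacity demand fits the supply.

    Bisection applies only because each assumed demand curve is non-increasing
    in price. If demand at price zero already fits, spot capacity is free.
    """
    if sum(t.demand(0.0) for t in tenants) <= spot_capacity_kw:
        return 0.0
    lo, hi = 0.0, p_max
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(t.demand(mid) for t in tenants) > spot_capacity_kw:
            lo = mid  # still over-subscribed: raise the price
        else:
            hi = mid  # demand fits: try a lower price
    return hi

# Example: 30 kW of unused (spot) capacity shared by three racks.
tenants = [Tenant("rack-A", 20, 2.0), Tenant("rack-B", 15, 1.0), Tenant("rack-C", 10, 0.5)]
p = clearing_price(tenants, spot_capacity_kw=30.0)
for t in tenants:
    print(f"{t.name}: {t.demand(p):.1f} kW at ${p:.2f}/kW")
```

Running the example prints each rack's allocation at the resulting price, with aggregate demand meeting the 30 kW of spot capacity.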
Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks
Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, S. Keckler
Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory. Prior work tries to address this restriction by virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be utilized for memory allocations. Despite its merits, virtualizing memory can incur significant performance overheads when the time needed to copy data back and forth from CPU memory is higher than the latency to perform DNN computations. We introduce a high-performance virtualization strategy based on a "compressing DMA engine" (cDMA) that drastically reduces the size of the data structures that are targeted for CPU-side allocations. The cDMA engine offers an average 2.6x (maximum 13.8x) compression ratio by exploiting the sparsity inherent in offloaded data, improving the performance of virtualized DNNs by an average 53% (maximum 79%) when evaluated on an NVIDIA Titan Xp.
{"title":"Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks","authors":"Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, S. Keckler","doi":"10.1109/HPCA.2018.00017","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00017","url":null,"abstract":"Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory. Prior work tries to address this restriction by virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be utilized for memory allocations. Despite its merits, virtualizing memory can incur significant performance overheads when the time needed to copy data back and forth from CPU memory is higher than the latency to perform DNN computations. We introduce a high-performance virtualization strategy based on a \"compressing DMA engine\" (cDMA) that drastically reduces the size of the data structures that are targeted for CPU-side allocations. The cDMA engine offers an average 2.6x (maximum 13.8x) compression ratio by exploiting the sparsity inherent in offloaded data, improving the performance of virtualized DNNs by an average 53% (maximum 79%) when evaluated on an NVIDIA Titan Xp.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132416853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 152
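
The cDMA abstract attributes its gains to the sparsity of the activation data offloaded to CPU memory: ReLU layers leave a large fraction of activations at exactly zero. The hardware compression pipeline itself is not described in the abstract, so the snippet below is only a software sketch of one simple sparsity-aware scheme (a presence bitmask plus packed non-zero values) to make the notion of compression ratio concrete; NumPy, the function names, and the synthetic ReLU-like tensor are illustrative assumptions rather than the paper's actual design.

```python
import numpy as np

def zvc_compress(activations: np.ndarray):
    """Zero-value compression: a 1-bit presence mask plus packed non-zero values.

    Illustrative software model only, not the hardware pipeline from the paper.
    """
    flat = activations.astype(np.float32).ravel()
    mask = flat != 0.0                  # 1 bit per element
    nonzeros = flat[mask]               # packed non-zero payload
    packed_mask = np.packbits(mask)     # 8 mask bits per byte
    return packed_mask, nonzeros, flat.size

def zvc_decompress(packed_mask, nonzeros, n):
    mask = np.unpackbits(packed_mask, count=n).astype(bool)
    out = np.zeros(n, dtype=np.float32)
    out[mask] = nonzeros
    return out

def compression_ratio(n_elems, nonzeros, packed_mask):
    original_bytes = n_elems * 4                          # fp32 activations
    compressed_bytes = nonzeros.nbytes + packed_mask.nbytes
    return original_bytes / compressed_bytes

# Example: a ReLU-like activation tensor with roughly 70% zeros.
rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal((64, 512)).astype(np.float32) - 0.5, 0.0)
packed_mask, nz, n = zvc_compress(acts)
assert np.array_equal(zvc_decompress(packed_mask, nz, n), acts.ravel())
print(f"compression ratio: {compression_ratio(n, nz, packed_mask):.2f}x")
```

The achievable ratio depends directly on the zero fraction of the offloaded tensors, which is why the paper reports compression and performance figures as averages across workloads.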