A compressed memory hierarchy using an indirect index cache

Erik G. Hallnor, S. Reinhardt
DOI: 10.1145/1054943.1054945
Published in: Workshop on Memory Performance Issues, June 20, 2004
Citations: 46

Abstract

The large and growing impact of memory hierarchies on overall system performance compels designers to investigate innovative techniques to improve memory-system efficiency. We propose and analyze a memory hierarchy that increases both the effective capacity of memory structures and the effective bandwidth of interconnects by storing and transmitting data in compressed form. Caches play a key role in hiding memory latencies. However, cache sizes are constrained by die area and cost. A cache's effective size can be increased by storing compressed data, if the storage unused by a compressed block can be allocated to other blocks. We use a modified Indirect Index Cache to allocate variable amounts of storage to different blocks, depending on their compressibility. By coupling our compressed cache design with a similarly compressed main memory, we can easily transfer data between these structures in a compressed state, increasing the effective memory bus bandwidth. This optimization further improves performance when bus bandwidth is critical. Our simulation results, using the SPEC CPU2000 benchmarks, show that our design increases performance by up to 225% on some benchmarks while degrading performance in general by no more than 2%, other than a 12% decrease on a single benchmark. Compressed bus transfers alone account for up to 80% of this improvement, with the remainder coming from increased effective cache capacity. As memory latencies increase, our design becomes even more beneficial.
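The core allocation idea — tags point indirectly into a data array, and a compressed block occupies only as many fixed-size sub-blocks as its compressed length requires — can be illustrated with a toy model. This sketch is not the paper's actual hardware design; the sub-block size, capacity, compressor (zlib), and bump allocation are illustrative assumptions (a real cache would use a hardware compressor, eviction, and free-list management):

```python
import zlib

SUBBLOCK = 16        # allocation granularity in bytes (assumed, toy value)
DATA_BYTES = 1024    # total data-array capacity (assumed, toy value)

class IndirectIndexCache:
    """Toy model: tags map addresses to variable-length compressed extents."""

    def __init__(self):
        self.tags = {}             # addr -> (offset, byte length, is_compressed)
        self.data = bytearray(DATA_BYTES)
        self.free = DATA_BYTES     # unallocated bytes in the data array
        self.next_off = 0          # bump allocator; real designs reclaim freed space

    def insert(self, addr, block):
        comp = zlib.compress(block)
        # Store raw if compression does not help (incompressible data).
        payload = comp if len(comp) < len(block) else block
        n_sub = -(-len(payload) // SUBBLOCK)   # ceil-divide into sub-blocks
        need = n_sub * SUBBLOCK
        if need > self.free:
            return False                       # would trigger eviction in a real cache
        off = self.next_off
        self.data[off:off + len(payload)] = payload
        self.tags[addr] = (off, len(payload), payload is comp)
        self.next_off += need
        self.free -= need
        return True

    def read(self, addr):
        off, length, compressed = self.tags[addr]
        raw = bytes(self.data[off:off + length])
        return zlib.decompress(raw) if compressed else raw
```

A highly compressible 64-byte block consumes only one 16-byte sub-block here, so the freed storage can hold other blocks — the mechanism behind the increased effective cache capacity described above.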