23.9 An 8-channel 4.5Gb 180GB/s 18ns-row-latency RAM for the last level cache

T.-K.J. Ting, Gyh-Bin Wang, Ming-Hung Wang, Chun-Peng Wu, Chunyan Wang, C. Lo, Li-Chin Tien, D. Yuan, Yung-Ching Hsieh, Jenn-Shiang Lai, Wen-Pin Hsu, Chien-Chih Huang, Chi-Kang Chen, Yung-Fa Chou, D. Kwai, Zhe Wang, Wei Wu, S. Tomishima, Patrick Stolt, Shih-Lien Lu
{"title":"23.9 An 8-channel 4.5Gb 180GB/s 18ns-row-latency RAM for the last level cache","authors":"T.-K.J. Ting, Gyh-Bin Wang, Ming-Hung Wang, Chun-Peng Wu, Chunyan Wang, C. Lo, Li-Chin Tien, D. Yuan, Yung-Ching Hsieh, Jenn-Shiang Lai, Wen-Pin Hsu, Chien-Chih Huang, Chi-Kang Chen, Yung-Fa Chou, D. Kwai, Zhe Wang, Wei Wu, S. Tomishima, Patrick Stolt, Shih-Lien Lu","doi":"10.1109/ISSCC.2017.7870432","DOIUrl":null,"url":null,"abstract":"In recent years, the demand for memory performance has grown rapidly due to the increasing number of cores on a single CPU, along with the integration of graphics processing units and other accelerators. Caching has been a very effective way to relieve bandwidth demand and to reduce average memory latency. As shown by the cache feature table in Fig. 23.9.1, there is a big latency gap between SRAM caches in the CPU and the external DRAM main memory. As a key element for future computing systems, the last level cache (LLC) should have a high random access bandwidth, a low random access latency, a density of 1 to 8Gb, and all signal pads located on one side of the chip [1]. A logic-process-based solution was proposed [2], but it is not scalable, and has a high standby current due to its need for frequent refresh. HBM2 was also proposed [3], but its row latency is not better than conventional DRAM, and its random-access bandwidth is still limited by tFAW, as shown in Fig. 23.9.1. This paper describes the high-bandwidth low-latency (HBLL) RAM design: how it overcomes these challenges and meets requirements in a cost-effective way.","PeriodicalId":269679,"journal":{"name":"2017 IEEE International Solid-State Circuits Conference (ISSCC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Solid-State Circuits Conference (ISSCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSCC.2017.7870432","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

In recent years, the demand for memory performance has grown rapidly due to the increasing number of cores on a single CPU, along with the integration of graphics processing units and other accelerators. Caching has been a very effective way to relieve bandwidth demand and to reduce average memory latency. As shown by the cache feature table in Fig. 23.9.1, there is a large latency gap between the SRAM caches in the CPU and the external DRAM main memory. As a key element of future computing systems, the last level cache (LLC) should have a high random-access bandwidth, a low random-access latency, a density of 1 to 8Gb, and all signal pads located on one side of the chip [1]. A logic-process-based solution was proposed [2], but it is not scalable and has a high standby current due to its need for frequent refresh. HBM2 was also proposed [3], but its row latency is no better than that of conventional DRAM, and its random-access bandwidth is still limited by tFAW, as shown in Fig. 23.9.1. This paper describes the high-bandwidth low-latency (HBLL) RAM design: how it overcomes these challenges and meets the requirements in a cost-effective way.
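To make the two quantitative points above concrete, the sketch below estimates how tFAW caps the random-access bandwidth of a conventional DRAM channel and how a low-latency cache reduces average memory access time (AMAT). This is an illustrative calculation only; the timing values, access size, hit/miss figures, and function names are assumed placeholders, not numbers from the paper.

```python
# Illustrative sketch (not from the paper): tFAW-limited random-access
# bandwidth and the standard AMAT formula. All numeric values are assumed.

def tfaw_limited_bandwidth_gbs(tfaw_ns: float, bytes_per_access: int,
                               activates_per_window: int = 4,
                               channels: int = 1) -> float:
    """Random accesses that each open a new row are limited to
    `activates_per_window` activations per tFAW window in each channel."""
    accesses_per_s = activates_per_window / (tfaw_ns * 1e-9)
    return accesses_per_s * bytes_per_access * channels / 1e9


def amat_ns(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hit time plus the expected miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns


if __name__ == "__main__":
    # Assumed DDR-like timing: tFAW = 30 ns, 64 B cache-line accesses.
    print(f"1 channel : {tfaw_limited_bandwidth_gbs(30, 64):.1f} GB/s")
    print(f"8 channels: {tfaw_limited_bandwidth_gbs(30, 64, channels=8):.1f} GB/s")
    # Assumed LLC: 18 ns row latency on a hit, 50 ns DRAM penalty, 30% miss rate.
    print(f"AMAT with LLC: {amat_ns(18, 0.30, 50):.1f} ns")
```

With these assumed numbers, a single tFAW-bound channel delivers only about 8.5 GB/s of random-access bandwidth, which is why an LLC design must sidestep the tFAW constraint (and keep row latency low) rather than simply add channels.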