23.9 An 8-channel 4.5Gb 180GB/s 18ns-row-latency RAM for the last level cache

T.-K.J. Ting, Gyh-Bin Wang, Ming-Hung Wang, Chun-Peng Wu, Chunyan Wang, C. Lo, Li-Chin Tien, D. Yuan, Yung-Ching Hsieh, Jenn-Shiang Lai, Wen-Pin Hsu, Chien-Chih Huang, Chi-Kang Chen, Yung-Fa Chou, D. Kwai, Zhe Wang, Wei Wu, S. Tomishima, Patrick Stolt, Shih-Lien Lu

2017 IEEE International Solid-State Circuits Conference (ISSCC), February 2017. DOI: 10.1109/ISSCC.2017.7870432
Citations: 6
Abstract
In recent years, the demand for memory performance has grown rapidly, driven by the increasing number of cores on a single CPU and by the integration of graphics processing units and other accelerators. Caching has been a very effective way to relieve bandwidth pressure and to reduce average memory latency. As the cache feature table in Fig. 23.9.1 shows, there is a large latency gap between the SRAM caches in the CPU and the external DRAM main memory. As a key element of future computing systems, the last level cache (LLC) should offer high random-access bandwidth, low random-access latency, a density of 1 to 8Gb, and all signal pads located on one side of the chip [1]. A logic-process-based solution has been proposed [2], but it does not scale, and its need for frequent refresh leads to a high standby current. HBM2 has also been proposed [3], but its row latency is no better than that of conventional DRAM, and its random-access bandwidth is still limited by tFAW, as shown in Fig. 23.9.1. This paper describes the high-bandwidth low-latency (HBLL) RAM design and how it overcomes these challenges to meet these requirements in a cost-effective way.
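The tFAW limit mentioned above is worth quantifying. The sketch below is a minimal back-of-the-envelope Python model, using assumed timing values typical of DDR4/HBM2-class DRAM rather than figures from this paper, to show how a four-activates-per-window rule caps random-access bandwidth when every access must open a new row:

```python
# Illustrative model of why tFAW caps DRAM random-access bandwidth.
# All parameter values are assumptions for illustration, not numbers
# taken from the paper.

def tfaw_limited_bandwidth(t_faw_ns: float, activates_per_window: int,
                           bytes_per_activate: int) -> float:
    """Peak random-access bandwidth in GB/s when the activate rate is
    capped at `activates_per_window` activations per rolling tFAW
    window and each activation fetches `bytes_per_activate` bytes."""
    activates_per_second = activates_per_window / (t_faw_ns * 1e-9)
    return activates_per_second * bytes_per_activate / 1e9

# Assumed values: tFAW = 30 ns, four activates per window, and one
# 64B cache line fetched per row activation.
bw = tfaw_limited_bandwidth(t_faw_ns=30.0, activates_per_window=4,
                            bytes_per_activate=64)
print(f"tFAW-limited random-access bandwidth: {bw:.1f} GB/s")
# -> roughly 8.5 GB/s per channel under these assumptions
```

Under these assumed numbers, a tFAW-bound channel delivers only single-digit GB/s of truly random traffic, far below peak pin bandwidth, which is why relaxing this constraint matters for an LLC targeting the aggregate bandwidth quoted in the title.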