Fast and Accurate DRAM Simulation: Can we Further Accelerate it?

Johannes Feldmann, Kira Kraft, Lukas Steiner, N. Wehn, Matthias Jung
DOI: 10.23919/DATE48585.2020.9116275
Published in: 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Publication date: 2020-03-01
Citations: 8

Abstract

The simulation of Dynamic Random Access Memories (DRAMs) in a system context requires highly accurate models due to the complex timing and power behavior of DRAMs. However, cycle accurate DRAM models often become the bottleneck regarding the overall simulation time. Therefore, fast but accurate DRAM simulation models are mandatory. This paper proposes two new performance optimized DRAM models that further accelerate the simulation speed with only a negligible degradation in accuracy. The first model is an enhanced Transaction Level Model (TLM), which uses a look-up table to accelerate parts of the simulation that feature a high memory access density for online scenarios. The second model is a neural network based simulator for offline trace analysis. We show a mathematical methodology to generate the inputs for the Look-Up Table (LUT) and an optimized artificial training set for the neural network. The enhanced TLM model is up to 5 times faster compared to a state-of-the-art TLM DRAM simulator. The neural network is able to speed up the simulation up to a factor of 10×, while inferring on a GPU. Both solutions provide only a slight decrease in accuracy of approximately 5%.
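The paper itself does not publish code, but the first proposed model replaces cycle-accurate simulation of high-access-density phases with a pre-computed look-up table. As a rough illustration of that idea only, the sketch below uses a LUT keyed by quantized access density and row-buffer hit ratio to estimate the latency of a dense request window; all names, bucket sizes, and timing values are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the authors' code): a look-up table that stands in
# for cycle-accurate DRAM simulation during bursts of dense memory traffic.
# All timing values and quantization choices here are hypothetical.

# Pre-computed LUT: (access-density bucket, row-hit-ratio bucket) -> average
# latency per request in cycles. In a real flow these entries would come from
# offline cycle-accurate runs, not from a formula.
LUT = {
    (density, hit_bucket): 20 + 30 * (1 - hit_bucket / 4) + 2 * density
    for density in range(1, 9)   # quantized requests-per-window
    for hit_bucket in range(5)   # quantized row-buffer hit ratio
}

def estimate_latency(num_requests: int, window: int, row_hits: int) -> float:
    """Estimate the total latency of a dense request window via the LUT
    instead of simulating each DRAM command cycle by cycle."""
    # Quantize the window's access density into the LUT's 1..8 buckets.
    density = max(1, min(8, round(8 * num_requests / window)))
    # Quantize the row-buffer hit ratio into the LUT's 0..4 buckets.
    hit_bucket = round(4 * row_hits / max(1, num_requests))
    per_request = LUT[(density, hit_bucket)]
    return per_request * num_requests

# Example: 64 requests in a 128-cycle window, 48 of them row-buffer hits.
total = estimate_latency(num_requests=64, window=128, row_hits=48)
```

The trade-off this sketches is the one the abstract describes: the LUT answer is an average over a traffic class rather than an exact per-command result, which is why the authors report a small (~5%) accuracy loss in exchange for the speed-up.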