Incorporating pattern prediction technique for energy efficient filter cache design

K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya, Prasanna Venkatesh Kannan
{"title":"结合模式预测技术的高效节能滤波器缓存设计","authors":"K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya, Prasanna Venkatesh Kannan","doi":"10.1109/IWSOC.2003.1213003","DOIUrl":null,"url":null,"abstract":"A filter cache is proposed at a higher level than the L1 (main) cache in the memory hierarchy and is much smaller. The typical size of filter cache is of the order of 256 Bytes. Prediction algorithms popularly based upon the Next Fetch Prediction Table (NFPT) help making the choice between the filter cache and the main cache. In this paper we introduce a new prediction mechanism for predicting filter cache access, which relies on the hit or miss pattern of the instruction access stream over the past filter cache lines accesses. While NFPT makes predominantly incorrect hit-predictions, the proposed Pattern Table based approach reduces this. Predominantly correct prediction achieves efficient cache access, and eliminates cache-miss penalties. Our extensive simulations across a wide range of benchmark applications illustrate that the new prediction scheme is efficient as it results in improved prediction accuracy. Moreover, it reduces energy consumption of the filter cache by as much as 25% compared to NFPT based approaches. Further, the technique implemented is elegant in the form of hardware implementation as it consists only of a shift register and a Look up Table (LUT) and is hence area and energy efficient in contrast to the published prediction techniques.","PeriodicalId":259178,"journal":{"name":"The 3rd IEEE International Workshop on System-on-Chip for Real-Time Applications, 2003. Proceedings.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Incorporating pattern prediction technique for energy efficient filter cache design\",\"authors\":\"K. Vivekanandarajah, T. Srikanthan, S. Bhattacharyya, Prasanna Venkatesh Kannan\",\"doi\":\"10.1109/IWSOC.2003.1213003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A filter cache is proposed at a higher level than the L1 (main) cache in the memory hierarchy and is much smaller. The typical size of filter cache is of the order of 256 Bytes. Prediction algorithms popularly based upon the Next Fetch Prediction Table (NFPT) help making the choice between the filter cache and the main cache. In this paper we introduce a new prediction mechanism for predicting filter cache access, which relies on the hit or miss pattern of the instruction access stream over the past filter cache lines accesses. While NFPT makes predominantly incorrect hit-predictions, the proposed Pattern Table based approach reduces this. Predominantly correct prediction achieves efficient cache access, and eliminates cache-miss penalties. Our extensive simulations across a wide range of benchmark applications illustrate that the new prediction scheme is efficient as it results in improved prediction accuracy. Moreover, it reduces energy consumption of the filter cache by as much as 25% compared to NFPT based approaches. Further, the technique implemented is elegant in the form of hardware implementation as it consists only of a shift register and a Look up Table (LUT) and is hence area and energy efficient in contrast to the published prediction techniques.\",\"PeriodicalId\":259178,\"journal\":{\"name\":\"The 3rd IEEE International Workshop on System-on-Chip for Real-Time Applications, 2003. 
Proceedings.\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 3rd IEEE International Workshop on System-on-Chip for Real-Time Applications, 2003. Proceedings.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IWSOC.2003.1213003\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 3rd IEEE International Workshop on System-on-Chip for Real-Time Applications, 2003. Proceedings.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWSOC.2003.1213003","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

A filter cache sits at a higher level (closer to the processor) than the L1 (main) cache in the memory hierarchy and is much smaller; its typical size is of the order of 256 bytes. Prediction algorithms, popularly based on the Next Fetch Prediction Table (NFPT), help in choosing between the filter cache and the main cache. In this paper we introduce a new mechanism for predicting filter cache access, which relies on the hit/miss pattern of the instruction access stream over past filter cache line accesses. While NFPT makes predominantly incorrect hit predictions, the proposed Pattern Table based approach reduces these errors. Predominantly correct prediction yields efficient cache access and eliminates cache-miss penalties. Our extensive simulations across a wide range of benchmark applications show that the new prediction scheme is effective, as it improves prediction accuracy. Moreover, it reduces the energy consumption of the filter cache by as much as 25% compared to NFPT based approaches. Further, the technique lends itself to an elegant hardware implementation, as it consists only of a shift register and a look-up table (LUT), and is hence area and energy efficient in contrast to previously published prediction techniques.
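The abstract describes the predictor hardware as a shift register feeding a look-up table: the shift register records the hit/miss outcomes of recent filter cache accesses, and that bit pattern indexes a table whose entry predicts whether the next fetch will hit the filter cache. The sketch below is a minimal behavioral model of this idea in Python; the names (`PatternPredictor`, `PATTERN_BITS`), the history length, the table initialization, and the store-last-outcome training rule are illustrative assumptions, not details taken from the paper.

```python
# Behavioral sketch of a pattern-table based filter cache predictor.
# Assumptions: 4-bit history, LUT initialized to "predict hit", and a
# training rule that stores the last observed outcome for each pattern.

PATTERN_BITS = 4  # assumed history length; the paper does not state one here


class PatternPredictor:
    """Predicts whether the next instruction fetch will hit the filter
    cache, based on the hit/miss pattern of recent accesses."""

    def __init__(self, bits: int = PATTERN_BITS):
        self.bits = bits
        self.history = 0                # shift register of recent hit(1)/miss(0) bits
        self.table = [1] * (1 << bits)  # LUT: pattern -> predicted outcome

    def predict(self) -> bool:
        """True -> fetch from the filter cache; False -> bypass to L1."""
        return self.table[self.history] == 1

    def update(self, hit: bool) -> None:
        """Once the actual outcome is known, train the LUT entry for the
        current pattern, then shift the outcome into the history register."""
        self.table[self.history] = 1 if hit else 0
        mask = (1 << self.bits) - 1
        self.history = ((self.history << 1) | int(hit)) & mask


# Usage: consult the predictor before each fetch, then train it with
# the observed outcome (here fed from a made-up outcome sequence).
predictor = PatternPredictor()
for hit in [True, True, False, True, False, False, True, True]:
    use_filter_cache = predictor.predict()
    # ... issue the fetch to the predicted cache level ...
    predictor.update(hit)
```

In hardware terms, `history` corresponds to the shift register and `table` to the LUT. A predicted hit steers the next fetch to the filter cache, while a predicted miss sends it directly to the L1 cache, avoiding the filter cache miss penalty; this is what makes mostly-correct prediction both fast and energy efficient.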