Dynamic split: flexible border between instruction and data cache

P. Trancoso
{"title":"Dynamic split: flexible border between instruction and data cache","authors":"P. Trancoso","doi":"10.1109/DSD.2005.35","DOIUrl":null,"url":null,"abstract":"Current microprocessors are optimized for the average use. Nevertheless, it is known that different applications impose different demands on the system. This work focuses on the reconfiguration of the first-level caches. In order to achieve good performance, the first-level cache is split physically into two parts, one for instruction and one for data. This separation has the benefit of avoiding interference between instructions and data. Nevertheless, this separation is strict and determined at design-time. In this work we show a cache design that is able to change the split dynamically at runtime. The proposed design was tested using simulation of a variety of benchmark applications from the MiBench suite on two baseline architectures: embeddedXScale and high-end PowerPC. The results show that, while the average misses rate reduction may seem small; certain applications show a benefit larger than 90%. For miss rate reduction, the dynamic split cache seems to be more relevant for the cache with the smaller associativity (PowerPC). Lastly, the dynamic split cache was also used to reduce the energy consumption without loss of performance. This feature resulted in a significant energy reduction and the results showed that it has a bigger impact for the caches with larger associativity (42% energy reduction for the XScale and 28% for the PowerPC for a large data set size).","PeriodicalId":119054,"journal":{"name":"8th Euromicro Conference on Digital System Design (DSD'05)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"8th Euromicro Conference on Digital System Design (DSD'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSD.2005.35","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Current microprocessors are optimized for the average case. Nevertheless, it is well known that different applications impose different demands on the system. This work focuses on the reconfiguration of the first-level caches. To achieve good performance, the first-level cache is physically split into two parts, one for instructions and one for data. This separation has the benefit of avoiding interference between instructions and data, but it is strict and fixed at design time. In this work we present a cache design that can change the split dynamically at runtime. The proposed design was evaluated by simulating a variety of benchmark applications from the MiBench suite on two baseline architectures: the embedded XScale and the high-end PowerPC. The results show that, while the average miss rate reduction may seem small, certain applications show a benefit larger than 90%. For miss rate reduction, the dynamic split cache is more relevant for the cache with the smaller associativity (PowerPC). Lastly, the dynamic split cache was also used to reduce energy consumption without loss of performance. This resulted in a significant energy reduction, with a larger impact for the caches with higher associativity (a 42% energy reduction for the XScale and 28% for the PowerPC for a large data set size).
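The abstract does not describe the mechanism in detail, so the following is a minimal sketch of one plausible interpretation: a unified set-associative array whose ways are partitioned between an instruction side and a data side, with a boundary that can be moved at runtime. All class names, method names, and parameters below are hypothetical illustrations, not the paper's actual design.

```python
# Hypothetical sketch of a way-partitioned "dynamic split" L1 cache.
# Not the design from the paper; an illustration of the general idea only.
from collections import OrderedDict


class DynamicSplitCache:
    def __init__(self, num_sets=64, num_ways=8, insn_ways=4):
        self.num_sets = num_sets
        self.num_ways = num_ways
        self.insn_ways = insn_ways  # ways currently reserved for instructions
        # Each set keeps one LRU-ordered tag store per side ('I' and 'D').
        self.sets = [{"I": OrderedDict(), "D": OrderedDict()}
                     for _ in range(num_sets)]
        self.misses = {"I": 0, "D": 0}

    def _ways_for(self, side):
        return self.insn_ways if side == "I" else self.num_ways - self.insn_ways

    def access(self, side, addr, block_bytes=32):
        """Look up addr ('I' for a fetch, 'D' for a load/store); fill on miss."""
        tag, index = divmod(addr // block_bytes, self.num_sets)
        lines = self.sets[index][side]
        if tag in lines:
            lines.move_to_end(tag)          # LRU update on a hit
            return True
        self.misses[side] += 1
        if len(lines) >= self._ways_for(side):
            lines.popitem(last=False)       # evict this side's LRU line
        lines[tag] = True
        return False

    def resize_split(self, new_insn_ways):
        """Move the instruction/data boundary at runtime; the shrinking side
        gives up its least-recently-used lines."""
        self.insn_ways = max(1, min(self.num_ways - 1, new_insn_ways))
        for s in self.sets:
            for side in ("I", "D"):
                while len(s[side]) > self._ways_for(side):
                    s[side].popitem(last=False)


# Example: a periodic controller could compare I- and D-side miss counts and
# shift a way toward whichever side is missing more.
cache = DynamicSplitCache()
for pc in range(0, 4096, 4):
    cache.access("I", pc)
cache.resize_split(2)  # application turned out to be data-heavy: shrink I side
```

In such a scheme, a data-dominated application can borrow capacity from the instruction side (or vice versa), which is one way the large per-application miss rate reductions reported in the abstract could arise; unused ways could also be disabled to trade capacity for energy.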