Working with Process Variation Aware Caches

M. Mutyam, N. Vijaykrishnan
{"title":"Working with Process Variation Aware Caches","authors":"M. Mutyam, N. Vijaykrishnan","doi":"10.1145/1266366.1266615","DOIUrl":null,"url":null,"abstract":"Deep-submicron designs have to take care of process variation effects as variations in critical process parameters result in large variations in access latencies of hardware components. This is severe in the case of memory components as minimum sized transistors are used in their design. In this work, by considering on-chip data caches, we study the effect of access latency variations on performance. We discuss performance losses due to the worst-case design, wherein the entire cache operates with the worst-case process variation delay, followed by process variation aware cache designs which work at set-level granularity. We then propose a technique called block rearrangement to minimize performance loss incurred by a process variation aware cache which works at set-level granularity. Using block rearrangement technique, we rearrange the physical locations of cache blocks such that a cache set can have its \"n\" blocks (assuming a n-way set-associative cache) in multiple rows instead of a single row as in the case of a cache with conventional addressing scheme. By distributing blocks of a cache set over multiple sets, we minimize the number of sets being affected by process variation. We evaluate our technique using SPEC2000 CPU benchmarks and show that our technique achieves significant performance benefits over caches with conventional addressing scheme","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"80 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"33","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 Design, Automation & Test in Europe Conference & Exhibition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1266366.1266615","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 33

Abstract

Deep-submicron designs must account for process variation effects, as variations in critical process parameters result in large variations in the access latencies of hardware components. This is especially severe for memory components, since minimum-sized transistors are used in their design. In this work, considering on-chip data caches, we study the effect of access latency variations on performance. We discuss performance losses due to the worst-case design, wherein the entire cache operates with the worst-case process variation delay, followed by process variation aware cache designs which work at set-level granularity. We then propose a technique called block rearrangement to minimize the performance loss incurred by a process variation aware cache which works at set-level granularity. Using the block rearrangement technique, we rearrange the physical locations of cache blocks such that a cache set can have its "n" blocks (assuming an n-way set-associative cache) in multiple rows instead of a single row, as in the case of a cache with a conventional addressing scheme. By distributing the blocks of a cache set over multiple sets, we minimize the number of sets affected by process variation. We evaluate our technique using the SPEC2000 CPU benchmarks and show that it achieves significant performance benefits over caches with a conventional addressing scheme.
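
The abstract describes block rearrangement only at a high level, so the following Python sketch illustrates the underlying trade-off: under a set-level process-variation-aware design, a set is slowed down by its slowest block, so remapping which physical blocks back which logical sets changes how many sets must run at the worst-case latency. The latency values, the fraction of slow blocks, the cache dimensions, and the greedy packing used here are assumptions chosen for illustration; they are not the authors' latency model, rearrangement algorithm, or simulation setup.

    import random

    # Minimal illustrative model (assumptions for this sketch, not the
    # authors' simulator): a cache with R sets of N_WAYS blocks each, where
    # every physical block independently ends up with either a nominal or a
    # "slow" access latency because of process variation.  Under a
    # set-level process-variation-aware design, a set is assumed to operate
    # at the latency of its slowest block.

    random.seed(0)
    R, N_WAYS = 256, 4       # number of sets/rows and associativity (illustrative)
    SLOW_PROB = 0.05         # fraction of blocks hit by worst-case variation (assumed)
    NOMINAL, SLOW = 1, 2     # block access latencies in cycles (assumed)

    # One latency per physical block, indexed [row][way].
    latency = [[SLOW if random.random() < SLOW_PROB else NOMINAL
                for _ in range(N_WAYS)]
               for _ in range(R)]


    def slow_sets(block_of):
        """Count sets forced to the slow latency under a given placement.

        block_of(s, way) returns the physical (row, way) that backs way
        `way` of logical set `s`.
        """
        count = 0
        for s in range(R):
            worst = NOMINAL
            for way in range(N_WAYS):
                r, w = block_of(s, way)
                worst = max(worst, latency[r][w])
            if worst == SLOW:
                count += 1
        return count


    def conventional(s, way):
        # Conventional addressing: all N_WAYS blocks of set s sit in row s.
        return s, way


    # Block rearrangement, illustrated with a simple greedy packing: rank
    # all physical blocks by latency and pack the slow ones into as few
    # logical sets as possible, so fewer sets are forced to run at the
    # worst-case latency.  The paper's rearrangement is more constrained
    # (it must stay compatible with the cache addressing scheme); this only
    # shows why remapping reduces the number of affected sets.
    ranked = sorted(((latency[r][w], r, w)
                     for r in range(R) for w in range(N_WAYS)), reverse=True)
    placement = {(i // N_WAYS, i % N_WAYS): (r, w)
                 for i, (_, r, w) in enumerate(ranked)}


    def rearranged(s, way):
        return placement[(s, way)]


    print("sets forced to slow latency, conventional:", slow_sets(conventional))
    print("sets forced to slow latency, rearranged  :", slow_sets(rearranged))

With these illustrative parameters (1024 blocks, about 5% of them slow), conventional addressing typically leaves several dozen sets containing at least one slow block, while the packed placement confines the slow blocks to roughly a dozen sets, which is the kind of reduction block rearrangement aims for.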