Variation aware cache partitioning for multithreaded programs

V. Kozhikkottu, Abhisek Pan, Vijay S. Pai, S. Dey, A. Raghunathan
{"title":"Variation aware cache partitioning for multithreaded programs","authors":"V. Kozhikkottu, Abhisek Pan, Vijay S. Pai, S. Dey, A. Raghunathan","doi":"10.1145/2593069.2593240","DOIUrl":null,"url":null,"abstract":"Multithreaded programs are commonly written and optimized for homogeneous multi-core processors assuming equal performance from all the cores. This assumption greatly simplifies the partitioning and balancing of an application's workload across threads; however, it no longer holds when the frequencies of the cores differ due to within-die variations, leading to a degradation in performance. We observe that, in addition to the frequency of the core that it executes on, the performance of a thread is also dependent on the share of shared system resources, such as last-level cache, that it receives. We propose variation-aware cache partitioning as an approach to redress the variation-induced imbalance in the execution times of threads, thereby improving the performance of multi-threaded programs. We discuss the challenges involved in realizing our proposal, including synchronization (e.g., barriers) across threads, which results in faster threads being limited by slower threads, the complex and non-linear relationship between a thread's performance and the cache capacity allocated to it, and the fact that different program phases, can respond quite differently to varying cache capacity. We propose a runtime scheme to perform spatio-temporal cache partitioning while considering both chip characteristics (frequency variations) and program characteristics. We evaluate the proposed technique by applying it to an ensemble of variation-impacted multi-cores executing multi-threaded programs from the PAR-SEC and SPEC-OMP suites, and demonstrate that it results in an average performance improvement of 15% by mitigating the impact of frequency variations.","PeriodicalId":433816,"journal":{"name":"2014 51st ACM/EDAC/IEEE Design Automation Conference (DAC)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 51st ACM/EDAC/IEEE Design Automation Conference (DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2593069.2593240","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Multithreaded programs are commonly written and optimized for homogeneous multi-core processors, assuming equal performance from all the cores. This assumption greatly simplifies the partitioning and balancing of an application's workload across threads; however, it no longer holds when the frequencies of the cores differ due to within-die variations, leading to a degradation in performance. We observe that, in addition to the frequency of the core that it executes on, the performance of a thread also depends on the share of shared system resources, such as the last-level cache, that it receives. We propose variation-aware cache partitioning as an approach to redress the variation-induced imbalance in the execution times of threads, thereby improving the performance of multithreaded programs. We discuss the challenges involved in realizing our proposal: synchronization (e.g., barriers) across threads, which results in faster threads being limited by slower threads; the complex, non-linear relationship between a thread's performance and the cache capacity allocated to it; and the fact that different program phases can respond quite differently to varying cache capacity. We propose a runtime scheme to perform spatio-temporal cache partitioning while considering both chip characteristics (frequency variations) and program characteristics. We evaluate the proposed technique by applying it to an ensemble of variation-impacted multi-cores executing multithreaded programs from the PARSEC and SPEC-OMP suites, and demonstrate that it results in an average performance improvement of 15% by mitigating the impact of frequency variations.
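
The abstract does not give the details of the runtime partitioning scheme, but the core idea, giving the threads that gate barrier synchronization a larger share of the last-level cache, can be sketched. Below is a minimal, hypothetical Python sketch: the analytical performance model (the estimated_time function and its sensitivity parameter) is an illustrative assumption rather than anything from the paper, and the greedy allocator simply hands spare LLC ways to whichever thread is currently slowest, since barriers make that thread the program's critical path.

```python
# Illustrative sketch of variation-aware cache partitioning (not the paper's
# actual algorithm, whose details are not given in this abstract). Spare
# last-level-cache ways are assigned greedily to the slowest thread, because
# barrier synchronization makes that thread gate overall progress. The
# execution-time model below (cycles shrink as allocated ways grow, scaled by
# core frequency) is a hypothetical stand-in for the per-thread performance
# curves a real runtime scheme would measure online.

def estimated_time(base_cycles, freq_ghz, ways, sensitivity=0.05):
    """Crude model: more cache ways -> fewer miss stalls -> fewer cycles."""
    cycles = base_cycles / (1.0 + sensitivity * ways)
    return cycles / (freq_ghz * 1e9)  # seconds

def partition_ways(base_cycles, freqs_ghz, total_ways, min_ways=1):
    """Greedily give each spare LLC way to the thread with the longest
    estimated execution time."""
    n = len(freqs_ghz)
    ways = [min_ways] * n
    spare = total_ways - min_ways * n
    for _ in range(spare):
        times = [estimated_time(base_cycles[i], freqs_ghz[i], ways[i])
                 for i in range(n)]
        ways[times.index(max(times))] += 1
    return ways

if __name__ == "__main__":
    # Four threads with equal work, but within-die variation leaves core 3 slow.
    work = [1e9] * 4
    freqs = [3.0, 3.0, 3.0, 2.2]  # GHz
    print(partition_ways(work, freqs, total_ways=16))
    # The slow core receives the largest share of the 16 LLC ways.
```

Running the example with one variation-slowed core shows that core receiving most of the cache ways, which illustrates the imbalance-redressing behavior the paper targets.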