A Taxonomy of Data Prefetching Mechanisms

S. Byna, Yong Chen, Xian-He Sun
{"title":"数据预取机制的分类","authors":"S. Byna, Yong Chen, Xian-He Sun","doi":"10.1109/I-SPAN.2008.24","DOIUrl":null,"url":null,"abstract":"Data prefetching has been considered an effective way to mask data access latency caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been proposed in the last few years to reduce data access latency by taking advantage of multi-core architectures. In this paper, we propose a taxonomy that classifies various design concerns in developing a prefetching strategy. We discuss various prefetching strategies and issues that have to be considered in designing a prefetching strategy for multi-core processors.","PeriodicalId":305776,"journal":{"name":"2008 International Symposium on Parallel Architectures, Algorithms, and Networks (i-span 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"43","resultStr":"{\"title\":\"A Taxonomy of Data Prefetching Mechanisms\",\"authors\":\"S. Byna, Yong Chen, Xian-He Sun\",\"doi\":\"10.1109/I-SPAN.2008.24\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data prefetching has been considered an effective way to mask data access latency caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been proposed in the last few years to reduce data access latency by taking advantage of multi-core architectures. In this paper, we propose a taxonomy that classifies various design concerns in developing a prefetching strategy. We discuss various prefetching strategies and issues that have to be considered in designing a prefetching strategy for multi-core processors.\",\"PeriodicalId\":305776,\"journal\":{\"name\":\"2008 International Symposium on Parallel Architectures, Algorithms, and Networks (i-span 2008)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-05-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"43\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 International Symposium on Parallel Architectures, Algorithms, and Networks (i-span 2008)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/I-SPAN.2008.24\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 International Symposium on Parallel Architectures, Algorithms, and Networks (i-span 2008)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/I-SPAN.2008.24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 43

Abstract

Data prefetching has been considered an effective way to mask data access latency caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been proposed in the last few years to reduce data access latency by taking advantage of multi-core architectures. In this paper, we propose a taxonomy that classifies various design concerns in developing a prefetching strategy. We discuss various prefetching strategies and issues that have to be considered in designing a prefetching strategy for multi-core processors.
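To make the core idea concrete, the sketch below (not taken from the paper) illustrates software-directed prefetching, one of the strategies such a taxonomy covers: the GCC/Clang builtin __builtin_prefetch requests data a fixed number of iterations ahead of its use, so it is more likely to be in cache when needed. The function name and the lookahead distance are illustrative assumptions.

```c
/* Minimal sketch of software prefetching (illustrative, not the paper's method):
 * request array elements PREFETCH_DISTANCE iterations ahead of their use. */
#include <stddef.h>

#define PREFETCH_DISTANCE 16  /* illustrative lookahead; tune per workload */

long sum_with_prefetch(const long *data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n) {
            /* rw = 0: prefetch for read; locality = 2: moderate temporal reuse */
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 2);
        }
        sum += data[i];
    }
    return sum;
}
```

Whether such an explicit hint helps depends on the access pattern and on how well the hardware prefetcher already covers it, which is exactly the kind of design concern the paper's taxonomy organizes.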