Overcoming the Memory Hierarchy Inefficiencies in Graph Processing Applications

Jilan Lin, Shuangchen Li, Yufei Ding, Yuan Xie
{"title":"Overcoming the Memory Hierarchy Inefficiencies in Graph Processing Applications","authors":"Jilan Lin, Shuangchen Li, Yufei Ding, Yuan Xie","doi":"10.1109/ICCAD51958.2021.9643434","DOIUrl":null,"url":null,"abstract":"Graph processing participates a vital role in mining relational data. However, the intensive but inefficient memory accesses make graph processing applications severely bottlenecked by the conventional memory hierarchy. In this work, we focus on inefficiencies that exist on both on-chip cache and off-chip memory. First, graph processing is known dominated by expensive random accesses, which are difficult to be captured by conventional cache and prefetcher architectures, leading to low cache hits and exhausting main memory visits. Second, the off-chip bandwidth is further underutilized by the small data granularity. Because each vertex/edge data in the graph only needs 4-8B, which is much smaller than the memory access granularity of 64B. Thus, lots of bandwidth is wasted fetching unnecessary data. Therefore, we present G-MEM, a customized memory hierarchy design for graph processing applications. First, we propose a coherence-free scratchpad as the on-chip memory, which leverages the power-law characteristic of graphs and only stores those hot data that are frequent-accessed. We equip the scratchpad memory with a degree-aware mapping strategy to better manage it for various applications. On the other hand, we design an elastic-granularity DRAM (EG-DRAM) to facilitate the main memory access. The EG-DRAM is based on near-data processing architecture, which processes and coalesces multiple fine-grained memory accesses together to maximize bandwidth efficiency. 
Putting them together, the G-MEM demonstrates a 2.48 × overall speedup over a vanilla CPU, with 1.44 × and 1.79 × speedup against the state-of-the-art cache architecture and memory subsystem, respectively.","PeriodicalId":370791,"journal":{"name":"2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCAD51958.2021.9643434","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Graph processing plays a vital role in mining relational data. However, intensive yet inefficient memory accesses leave graph processing applications severely bottlenecked by the conventional memory hierarchy. In this work, we focus on inefficiencies in both the on-chip cache and off-chip memory. First, graph processing is dominated by expensive random accesses, which conventional cache and prefetcher architectures struggle to capture, leading to low cache hit rates and excessive main-memory traffic. Second, off-chip bandwidth is further underutilized by the small data granularity: each vertex or edge in the graph needs only 4-8 B, far smaller than the 64 B memory access granularity, so much of the bandwidth is wasted fetching unnecessary data. We therefore present G-MEM, a customized memory hierarchy design for graph processing applications. First, we propose a coherence-free scratchpad as the on-chip memory, which leverages the power-law characteristic of graphs and stores only the hot data that are frequently accessed. We equip the scratchpad with a degree-aware mapping strategy to manage it effectively across applications. Second, we design an elastic-granularity DRAM (EG-DRAM) to improve main memory access. EG-DRAM builds on a near-data-processing architecture that coalesces multiple fine-grained memory accesses to maximize bandwidth efficiency. Put together, G-MEM demonstrates a 2.48× overall speedup over a vanilla CPU, and 1.44× and 1.79× speedups over the state-of-the-art cache architecture and memory subsystem, respectively.
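The intuition behind the degree-aware scratchpad mapping can be illustrated with a short sketch (not from the paper; the synthetic degree distribution and capacity figures below are illustrative assumptions): in a power-law graph, a small set of high-degree vertices is incident to a large share of all edges, so pinning just those vertices on-chip serves a disproportionate fraction of vertex accesses during edge traversal.

```python
# Hedged sketch: why pinning high-degree "hot" vertices in a small on-chip
# scratchpad captures many accesses in a power-law graph. The degree
# distribution and 1% capacity below are illustrative assumptions, not
# values from the paper.

# Synthetic Zipf-like degree distribution over 10,000 vertices.
num_vertices = 10_000
degrees = [1000 // (rank + 1) + 1 for rank in range(num_vertices)]
total_edges = sum(degrees)

# Degree-aware mapping: pin the highest-degree vertices in the scratchpad.
scratchpad_capacity = num_vertices // 100  # assume 1% of vertices fit on-chip
hot = sorted(range(num_vertices), key=lambda v: degrees[v], reverse=True)
hot_set = set(hot[:scratchpad_capacity])

# In edge-centric traversal a vertex is touched once per incident edge, so
# the fraction of accesses served on-chip equals the hot set's edge share.
hot_edges = sum(degrees[v] for v in hot_set)
hit_rate = hot_edges / total_edges
print(f"scratchpad holds {len(hot_set)} vertices ({len(hot_set)/num_vertices:.0%}), "
      f"serves {hit_rate:.1%} of vertex accesses")
```

Because the distribution is heavy-tailed, the 1% of vertices held on-chip serves far more than 1% of accesses; this skew is what makes a small, coherence-free scratchpad effective where a conventional cache sees mostly random misses.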