Spara: An Energy-Efficient ReRAM-Based Accelerator for Sparse Graph Analytics Applications

Long Zheng, Jieshan Zhao, Yu Huang, Qinggang Wang, Zhen Zeng, Jingling Xue, Xiaofei Liao, Hai Jin
{"title":"Spara: An Energy-Efficient ReRAM-Based Accelerator for Sparse Graph Analytics Applications","authors":"Long Zheng, Jieshan Zhao, Yu Huang, Qinggang Wang, Zhen Zeng, Jingling Xue, Xiaofei Liao, Hai Jin","doi":"10.1109/IPDPS47924.2020.00077","DOIUrl":null,"url":null,"abstract":"Resistive random access memory (ReRAM) addresses the high memory bandwidth requirement challenge of graph analytics by integrating the computing logic in the memory. Due to the matrix-structured crossbar architecture, existing ReRAM-based accelerators, when handling real-world graphs that often have the skewed degree distribution, suffer from the severe sparsity problem arising from zero fillings and activation nondeterminism, incurring substantial ineffectual computations.In this paper, we observe that the sparsity sources lie in the consecutive mapping of source and destination vertex index onto the wordline and bitline of a crossbar. Although exhaustive graph reordering improves the sparsity-induced inefficiency, its totally-random (source and destination) vertex mapping leads to expensive overheads. This work exploits the insight in a mid-point vertex mapping with the random wordlines and consecutive bitlines. A cost-effective preprocessing is proposed to exploit the insight by rapidly exploring the crossbar-fit vertex reorderings but ignores the sparsity arising from activation dynamics. We present a novel ReRAM-based graph analytics accelerator, named Spara, which can maximize the workload density of crossbars dynamically by using a tightly-coupled bank parallel architecture further proposed. Results on real-world and synthesized graphs show that Spara outperforms GraphR and GraphSAR by 8.21 × and 5.01 × in terms of performance, and by 8.97 × and 5.68× in terms of energy savings (on average), while incurring a reasonable (<9.98%) pre-processing overhead.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"7 1","pages":"696-707"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPS47924.2020.00077","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 19

Abstract

Resistive random access memory (ReRAM) addresses the high memory-bandwidth requirement of graph analytics by integrating computing logic into the memory. Due to the matrix-structured crossbar architecture, existing ReRAM-based accelerators, when handling real-world graphs that often have a skewed degree distribution, suffer from a severe sparsity problem arising from zero fillings and activation nondeterminism, incurring substantial ineffectual computations. In this paper, we observe that these sparsity sources lie in the consecutive mapping of source and destination vertex indices onto the wordlines and bitlines of a crossbar. Although exhaustive graph reordering mitigates the sparsity-induced inefficiency, its totally random (source and destination) vertex mapping leads to expensive overheads. This work exploits the insight of a mid-point vertex mapping with random wordlines and consecutive bitlines. A cost-effective preprocessing scheme is proposed to exploit this insight by rapidly exploring crossbar-fit vertex reorderings, but it ignores the sparsity arising from activation dynamics. We present a novel ReRAM-based graph analytics accelerator, named Spara, which maximizes the workload density of crossbars dynamically by using a further proposed tightly-coupled bank-parallel architecture. Results on real-world and synthesized graphs show that Spara outperforms GraphR and GraphSAR by 8.21× and 5.01× in performance, and by 8.97× and 5.68× in energy savings (on average), while incurring a reasonable (<9.98%) preprocessing overhead.
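The following is a minimal, self-contained sketch of the sparsity problem the abstract describes, not the paper's implementation: when a C × C crossbar maps consecutive source ids to wordlines and consecutive destination ids to bitlines, a skewed real-world edge block fills only a few cells (zero fillings), whereas letting the wordlines take an arbitrary set of sources while keeping the bitlines consecutive (in the spirit of the mid-point mapping mentioned above) can raise the fraction of useful cells. All names, the toy graph, and the crossbar size are hypothetical and chosen only for illustration.

```python
# Illustrative sketch (assumptions, not the paper's actual mapping scheme).

C = 8  # assumed crossbar dimension: C wordlines x C bitlines

# Toy skewed graph: a handful of sources scattered across the vertex-id space,
# all pointing into one consecutive destination range.
sources = [0, 17, 33, 42, 58, 71, 86, 95]
dsts = list(range(100, 108))
edges = [(s, d) for s in sources for d in dsts if (s + d) % 2 == 0]

def crossbar_density(edges, row_map, col_map, C=C):
    """Fraction of the C x C cells that hold a real edge under a given mapping.

    row_map / col_map send vertex ids to wordline / bitline slots; edges whose
    endpoints fall outside this block would, in a full design, be handled by
    other crossbars and are simply skipped here.
    """
    filled = {(row_map[s], col_map[d])
              for s, d in edges if s in row_map and d in col_map}
    return len(filled) / (C * C)

# Baseline: consecutive vertex-id ranges on both wordlines and bitlines.
row_consec = {v: i for i, v in enumerate(range(0, C))}          # sources 0..7
col_consec = {v: i for i, v in enumerate(range(100, 100 + C))}  # dsts 100..107
print("consecutive rows + consecutive cols:",
      crossbar_density(edges, row_consec, col_consec))          # ~0.06

# Mid-point-style mapping: wordlines pick the sources that actually have edges
# into this destination range; bitlines stay consecutive.
row_free = {v: i for i, v in enumerate(sources[:C])}
print("free rows + consecutive cols:",
      crossbar_density(edges, row_free, col_consec))            # 0.50
```

Note that this sketch only illustrates the zero-filling source of sparsity that a static reordering can address; the sparsity due to activation nondeterminism and the tightly-coupled bank-parallel architecture are runtime aspects that a static example like this does not capture.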