An Efficient FPGA Accelerator Optimized for High Throughput Sparse CNN Inference

Jiayu Wen, Yufei Ma, Zhongfeng Wang
DOI: 10.1109/APCCAS50809.2020.9301696
Published in: 2020 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), 8 December 2020
Citations: 6

Abstract

Pruning techniques can compress CNN models by setting insignificant weights to zero, relieving the tremendous workload of large-scale CNNs. However, efficiently loading and operating on the nonzero data with high parallelism is a great challenge for hardware architectures because the pruned weights sit at random locations. To address this issue, this work proposes a sparsity-aware CNN accelerator that processes irregularly pruned CNN models. A candidate pool architecture is designed to pick only those activations selected by the nonzero weights. It is organized as a three-dimensional structure to relieve the workload imbalance caused by random nonzero-weight locations under high parallelism. In addition, a dedicated indexing method is designed to cooperate with the candidate pool architecture and complete the sparse dataflow. The proposed sparsity-aware CNN accelerator is demonstrated on an Intel Arria 10 FPGA with multiple popular CNN models and achieves up to 89.7% throughput improvement over the baseline design.
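The core idea of the dataflow described above can be illustrated in software: each nonzero weight carries an index that selects ("picks") the matching activation, so zero weights cost no memory traffic or arithmetic. The sketch below is a minimal software analogy under assumed names and a simple value/index storage layout; it is not the paper's hardware candidate pool or its actual indexing scheme.

```python
def sparse_dot(nz_weights, nz_indices, activations):
    """Accumulate only the products selected by nonzero weight indices.

    nz_weights: values of the nonzero weights after pruning
    nz_indices: position of each nonzero weight in the dense kernel
    activations: the dense activation vector the kernel slides over
    """
    acc = 0.0
    for w, idx in zip(nz_weights, nz_indices):
        acc += w * activations[idx]  # gather the one activation this weight needs
    return acc

# Dense kernel [0, 2, 0, -1] stored sparsely as values [2, -1] with indices [1, 3]:
# only two multiply-accumulates are performed instead of four.
result = sparse_dot([2.0, -1.0], [1, 3], [5.0, 4.0, 3.0, 2.0])
print(result)  # 2*4.0 + (-1)*2.0 = 6.0
```

In hardware, the difficulty the paper targets is that `idx` is random per weight, so parallel lanes contend for different activations; the three-dimensional candidate pool is the paper's mechanism for feeding those gathers without workload imbalance.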