Neural network pruning and hardware acceleration

Taehee Jeong, Ehsan Ghasemi, Jorn Tuyls, Elliott Delaye, Ashish Sirasao
{"title":"神经网络修剪和硬件加速","authors":"Taehee Jeong, Ehsam Ghasemi, Jorn Tuyls, Elliott Delaye, Ashish Sirasao","doi":"10.1109/UCC48980.2020.00069","DOIUrl":null,"url":null,"abstract":"Neural network pruning is a critical technique to efficiently deploy neural network models on edge devices with limited computing resources. Although many neural network pruning methods have been published, it is difficult to implement such algorithms due to their inherent complexity. In this work, we propose a functional pruning tool for neural network models. Our pruning procedure is simple and easy to be implemented, and efficient for deployment. Our pruning tool automatically detects redundancy inside neural network models and prunes the redundant channels. Doing so reduces the total number of model parameters and hence, compresses the size of the model. This approach significantly reduces the number of FLOPs needed for executing the neural network model and improves the inference runtime. To further improve the inference runtime of the pruned model, we leveraged Apache TVM to deploy the pruned model on the DPU FPGA-based hardware accelerator. To demonstrate our approach, we pruned the VGG-16 model on Flower dataset and reached 53-fold reduction in model size with only 7% drop in validation accuracy. The inference latency is reduced 4-fold on CPU and 16-fold on FPGA for the pruned models, compared with the latency of the base model on CPU.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Neural network pruning and hardware acceleration\",\"authors\":\"Taehee Jeong, Ehsam Ghasemi, Jorn Tuyls, Elliott Delaye, Ashish Sirasao\",\"doi\":\"10.1109/UCC48980.2020.00069\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural network pruning is a critical technique to efficiently deploy neural network models on edge devices with limited computing resources. Although many neural network pruning methods have been published, it is difficult to implement such algorithms due to their inherent complexity. In this work, we propose a functional pruning tool for neural network models. Our pruning procedure is simple and easy to be implemented, and efficient for deployment. Our pruning tool automatically detects redundancy inside neural network models and prunes the redundant channels. Doing so reduces the total number of model parameters and hence, compresses the size of the model. This approach significantly reduces the number of FLOPs needed for executing the neural network model and improves the inference runtime. To further improve the inference runtime of the pruned model, we leveraged Apache TVM to deploy the pruned model on the DPU FPGA-based hardware accelerator. To demonstrate our approach, we pruned the VGG-16 model on Flower dataset and reached 53-fold reduction in model size with only 7% drop in validation accuracy. 
The inference latency is reduced 4-fold on CPU and 16-fold on FPGA for the pruned models, compared with the latency of the base model on CPU.\",\"PeriodicalId\":125849,\"journal\":{\"name\":\"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/UCC48980.2020.00069\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UCC48980.2020.00069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Neural network pruning is a critical technique for efficiently deploying neural network models on edge devices with limited computing resources. Although many pruning methods have been published, such algorithms are difficult to implement due to their inherent complexity. In this work, we propose a functional pruning tool for neural network models. Our pruning procedure is simple, easy to implement, and efficient for deployment. The tool automatically detects redundancy inside a neural network model and prunes the redundant channels, which reduces the total number of model parameters and hence compresses the size of the model. This approach significantly reduces the number of FLOPs needed to execute the model and improves inference runtime. To further improve the inference runtime of the pruned model, we leveraged Apache TVM to deploy it on the DPU, an FPGA-based hardware accelerator. To demonstrate our approach, we pruned the VGG-16 model on the Flower dataset and achieved a 53-fold reduction in model size with only a 7% drop in validation accuracy. Compared with the base model on CPU, inference latency for the pruned models is reduced 4-fold on CPU and 16-fold on FPGA.
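
The abstract does not include the tool's source, so the sketch below only illustrates the general idea of channel pruning: score each convolution's output channels, keep the strongest, and rebuild a slimmer layer. It uses a simple L1-magnitude criterion in PyTorch; the scoring rule, the function name, and the `keep_ratio` parameter are illustrative assumptions, not the authors' redundancy-detection method.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Rebuild `conv` keeping only the output channels with the largest
    L1 weight norms. A generic magnitude heuristic, not the paper's
    automatic redundancy detection."""
    # One L1 norm per output-channel filter: shape (out_channels,)
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # Indices of the channels to keep, restored to their original order
    keep = torch.argsort(norms, descending=True)[:n_keep].sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
slim = prune_conv_channels(conv, keep_ratio=0.25)
print(slim.out_channels)  # 32: parameters and FLOPs drop proportionally
```

In a full network, the consumers of this layer (the next convolution's `in_channels`, plus any BatchNorm statistics) must be sliced to match, and a short fine-tuning pass is typically needed afterward, which is how a 53-fold compression can cost only a few points of validation accuracy.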
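The abstract likewise gives no deployment code. As a rough sketch of the Apache TVM flow it describes, the snippet below traces a (hypothetically already-pruned) VGG-16, imports it into Relay, and compiles it for a CPU target. The actual DPU deployment would replace the `llvm` target with TVM's Vitis AI (BYOC) integration, which requires the Xilinx toolchain and is omitted here; the input name `input0` is an arbitrary choice.

```python
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor  # named `graph_runtime` in TVM < 0.8

# Trace the model so TVM's PyTorch frontend can import it
model = torchvision.models.vgg16().eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Import into Relay and compile; "llvm" means a generic CPU target
mod, params = relay.frontend.from_pytorch(scripted, [("input0", list(example.shape))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run one inference through the compiled graph
dev = tvm.cpu(0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input0", tvm.nd.array(example.numpy()))
runtime.run()
out = runtime.get_output(0).numpy()
print(out.shape)  # (1, 1000)
```

Compiling ahead of time like this is what lets the same pruned graph be retargeted from a CPU to the FPGA DPU without changing the model itself.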