CluSpa: Computation Reduction in CNN Inference by exploiting Clustering and Sparsity

Imlijungla Longchar, Amey Varhade, Chetan Ingle, Saurabh Baranwal, H. Kapoor
{"title":"CluSpa: Computation Reduction in CNN Inference by exploiting Clustering and Sparsity","authors":"Imlijungla Longchar, Amey Varhade, Chetan Ingle, Saurabh Baranwal, H. Kapoor","doi":"10.1145/3564121.3564132","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks (CNNs) have grown in popularity and usage tremendously over the last few years, spanning across different task such as computer vision tasks, natural language processing, video recognition, and recommender systems. Despite the algorithmic advancements that drove the growth of CNN still has considerable computational and memory overhead that poses challenges in achieving real-time performance. Each input image requires millions to even billions of elementary arithmetic operations before the network obtains the result. In CNNs, convolutional and pooling layers are followed by activation layers involving various activation functions. Hence, a lot of work has been done to reduce these costs in the last few years. Numerous optimizations have addressed at both hardware and software levels. In this paper, we propose a software-based solution for improving the performance of inference of networks. We suggest a technique for the approximate computation of the convolution operation based on clustering and sharing of weights. We have utilized Gaussian Mixture Models for clustering. We exploit weight sparsity to further reduce computations on top of the clustering method. We were able to achieve a considerable reduction in the MAC operations and the overall computation speedup on popular CNN architectures","PeriodicalId":166150,"journal":{"name":"Proceedings of the Second International Conference on AI-ML Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Second International Conference on AI-ML Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3564121.3564132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Convolutional Neural Networks (CNNs) have grown tremendously in popularity and usage over the last few years, spanning tasks such as computer vision, natural language processing, video recognition, and recommender systems. Despite the algorithmic advancements that drove this growth, CNNs still carry considerable computational and memory overhead, which poses challenges in achieving real-time performance. Each input image requires millions or even billions of elementary arithmetic operations before the network produces a result. In CNNs, convolutional and pooling layers are followed by activation layers involving various activation functions. Consequently, a great deal of work over the last few years has aimed at reducing these costs, with numerous optimizations proposed at both the hardware and software levels. In this paper, we propose a software-based solution for improving the inference performance of such networks. We present a technique for approximate computation of the convolution operation based on clustering and sharing of weights, using Gaussian Mixture Models for clustering. On top of the clustering method, we exploit weight sparsity to further reduce computation. We achieve a considerable reduction in MAC operations and an overall computation speedup on popular CNN architectures.
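The abstract does not give implementation details, so the following is only a minimal sketch of the general idea it describes: cluster convolution weights with a Gaussian Mixture Model, share each cluster's centroid across its members, and skip zero weights so each output needs at most one multiplication per cluster. It uses scikit-learn's GaussianMixture; the function and parameter names (cluster_weights, approx_dot, n_clusters) are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the paper's implementation): GMM-based weight
# clustering/sharing plus sparsity-aware accumulation for one filter.

import numpy as np
from sklearn.mixture import GaussianMixture


def cluster_weights(weights, n_clusters=16):
    """Fit a GMM to the nonzero weights and map each weight to the mean
    of its most likely component. Zero weights stay zero, preserving
    sparsity; labels of -1 mark those zero entries."""
    w = weights.reshape(-1, 1)
    nonzero = w[w[:, 0] != 0].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(nonzero)
    labels = np.full(w.shape[0], -1, dtype=int)
    labels[w[:, 0] != 0] = gmm.predict(nonzero)
    centroids = gmm.means_.ravel()
    shared = np.zeros(w.shape[0])
    shared[labels >= 0] = centroids[labels[labels >= 0]]
    return shared.reshape(weights.shape), labels.reshape(weights.shape), centroids


def approx_dot(patch, labels, centroids):
    """Approximate the dot product of an input patch with a clustered,
    sparse filter: sum the activations belonging to each cluster first,
    then perform a single multiply per cluster; zero weights contribute
    nothing and cost nothing."""
    total, macs = 0.0, 0
    for c in range(len(centroids)):
        mask = labels == c
        if mask.any():
            total += centroids[c] * patch[mask].sum()  # one multiply per cluster
            macs += 1
    return total, macs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    filt = rng.normal(size=(3, 3, 64))
    filt[rng.random(filt.shape) < 0.5] = 0.0          # induce ~50% weight sparsity
    patch = rng.normal(size=filt.shape)

    shared, labels, centroids = cluster_weights(filt, n_clusters=16)
    approx, macs = approx_dot(patch, labels, centroids)
    exact = float((filt * patch).sum())
    print(f"exact={exact:.3f} approx={approx:.3f} "
          f"multiplies: {macs} vs {int((filt != 0).sum())} nonzero weights")
```

Under these assumptions, the number of multiplications per output is bounded by the number of clusters rather than by the filter size, and zero weights are skipped entirely; the residual error comes from replacing each weight with its cluster centroid.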