Automatic Optimising CNN with Depthwise Separable Convolution on FPGA (Abstract Only)

Ruizhe Zhao, Xinyu Niu, W. Luk
{"title":"Automatic Optimising CNN with Depthwise Separable Convolution on FPGA: (Abstact Only)","authors":"Ruizhe Zhao, Xinyu Niu, W. Luk","doi":"10.1145/3174243.3174959","DOIUrl":null,"url":null,"abstract":"Convolution layers in Convolutional Neural Networks (CNNs) are effective in vision feature extraction but quite inefficient in computational resource usage. Depthwise separable convolution layer has been proposed in recent publications to enhance the efficiency without reducing the effectiveness by separately computing the spatial and cross-channel correlations from input images and has proven successful in state-of-the-art networks such as MobileNets [1] and Xception [2]. Based on the facts that depthwise separable convolution is highly structured and uses limited resources, we argue that it can well fit reconfigurable platforms like FPGA. To benefit FPGA platforms with this new layer, in this paper, we present a novel framework that can automatically generate and optimise hardware designs for depthwise separable CNNs. Besides, in our framework, existing conventional CNNs can be systematically converted to ones whose standard convolution layers are selectively replaced with functionally identical depthwise separable convolution layers, by carefully balancing the trade-off among speed, accuracy, and resource usage through resource usage modelling and network fine-tuning. Results show that hardware designs generated by our framework can reach at most 231.7 frames per second regarding MobileNets, and for VGG-16 [3], we gain 3.43 times speed-up and 3.54% accuracy decrease on the ImageNet [4] dataset comparing the original model and a layer replaced one.","PeriodicalId":164936,"journal":{"name":"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3174243.3174959","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Convolution layers in Convolutional Neural Networks (CNNs) are effective for visual feature extraction but quite inefficient in their use of computational resources. The depthwise separable convolution layer has been proposed in recent publications to improve efficiency without reducing effectiveness, by computing the spatial and cross-channel correlations of input images separately; it has proven successful in state-of-the-art networks such as MobileNets [1] and Xception [2]. Because depthwise separable convolution is highly structured and uses limited resources, we argue that it is well suited to reconfigurable platforms such as FPGAs. To bring this new layer to FPGA platforms, we present a novel framework that automatically generates and optimises hardware designs for depthwise separable CNNs. Moreover, within our framework, existing conventional CNNs can be systematically converted into networks whose standard convolution layers are selectively replaced with functionally equivalent depthwise separable convolution layers, carefully balancing the trade-off among speed, accuracy, and resource usage through resource usage modelling and network fine-tuning. Results show that hardware designs generated by our framework reach up to 231.7 frames per second for MobileNets; for VGG-16 [3], we gain a 3.43× speed-up at a 3.54% accuracy decrease on the ImageNet [4] dataset when comparing the original model with a layer-replaced one.
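
The paper targets generated FPGA hardware, but the layer replacement it describes is straightforward to illustrate in software. A depthwise separable layer splits a standard K×K convolution into a depthwise K×K convolution (spatial correlations, one filter per input channel) followed by a pointwise 1×1 convolution (cross-channel correlations); for N output channels this cuts the weight and multiply count to roughly 1/N + 1/K² of the standard layer. Below is a minimal PyTorch sketch of that decomposition — our illustration, not the authors' code, and the function name `depthwise_separable` and the 64→128-channel sizes are hypothetical choices for the example:

```python
import torch
import torch.nn as nn

def depthwise_separable(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """Functionally comparable replacement for a standard KxK convolution
    (illustrative sketch; name and sizes are our own, not from the paper)."""
    return nn.Sequential(
        # Depthwise KxK conv: groups=in_ch gives one filter per input
        # channel, so it captures spatial correlations only.
        nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=k // 2,
                  groups=in_ch, bias=False),
        # Pointwise 1x1 conv: mixes information across channels.
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
    )

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
separable = depthwise_separable(64, 128)

x = torch.randn(1, 64, 32, 32)
assert standard(x).shape == separable(x).shape  # same output shape

n_std = sum(p.numel() for p in standard.parameters())   # 64*128*3*3 = 73728
n_sep = sum(p.numel() for p in separable.parameters())  # 64*3*3 + 64*128 = 8768
print(f"standard: {n_std}, separable: {n_sep}")         # roughly 8.4x fewer weights
```

The parameter counts printed at the end make the resource argument concrete: the separable version needs roughly 8.4× fewer weights for the same input/output shape, which is the kind of saving the framework's resource usage model trades against accuracy when deciding which layers to replace.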