Cross-Layer Design Exploration for Energy-Quality Tradeoffs in Spiking and Non-Spiking Deep Artificial Neural Networks

Bing Han;Aayush Ankit;Abhronil Sengupta;Kaushik Roy
{"title":"Cross-Layer Design Exploration for Energy-Quality Tradeoffs in Spiking and Non-Spiking Deep Artificial Neural Networks","authors":"Bing Han;Aayush Ankit;Abhronil Sengupta;Kaushik Roy","doi":"10.1109/TMSCS.2017.2737625","DOIUrl":null,"url":null,"abstract":"Deep learning convolutional artificial neural networks have achieved success in a large number of visual processing tasks and are currently utilized for many real-world applications like image search and speech recognition among others. However, despite achieving high accuracy in such classification problems, they involve significant computational resources. Over the past few years, non-spiking deep convolutional artificial neural network models have evolved into more biologically realistic and event-driven spiking deep convolutional artificial neural networks. Recent research efforts have been directed at developing mechanisms to convert traditional non-spiking deep convolutional artificial neural networks to the spiking ones where neurons communicate by means of spikes. However, there have been limited studies providing insights on the specific power, area, and energy benefits offered by the spiking deep convolutional artificial neural networks in comparison to their non-spiking counterparts. We perform a comprehensive study for hardware implementation of spiking/non-spiking deep convolutional artificial neural networks on MNIST, CIFAR10, and SVHN datasets. To this effect, we design AccelNN - a Neural Network Accelerator to execute neural network benchmarks and analyze the effects of circuit-architecture level techniques to harness event-drivenness. 
A comparative analysis between spiking and non-spiking versions of deep convolutional artificial neural networks is presented by performing trade-offs between recognition accuracy and corresponding power, latency and energy requirements.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"613-623"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2017.2737625","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multi-Scale Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/8006240/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

Deep learning convolutional artificial neural networks have achieved success in a large number of visual processing tasks and are currently utilized in many real-world applications, such as image search and speech recognition. However, despite achieving high accuracy on such classification problems, they require significant computational resources. Over the past few years, non-spiking deep convolutional artificial neural network models have evolved into more biologically realistic, event-driven spiking deep convolutional artificial neural networks. Recent research efforts have been directed at developing mechanisms to convert traditional non-spiking deep convolutional artificial neural networks into spiking ones, in which neurons communicate by means of spikes. However, there have been limited studies providing insight into the specific power, area, and energy benefits offered by spiking deep convolutional artificial neural networks in comparison to their non-spiking counterparts. We perform a comprehensive study of hardware implementations of spiking and non-spiking deep convolutional artificial neural networks on the MNIST, CIFAR10, and SVHN datasets. To this end, we design AccelNN, a neural network accelerator, to execute neural network benchmarks and analyze the effects of circuit- and architecture-level techniques for harnessing event-drivenness. A comparative analysis between spiking and non-spiking versions of deep convolutional artificial neural networks is presented by performing trade-offs between recognition accuracy and the corresponding power, latency, and energy requirements.
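The ANN-to-SNN conversion the abstract refers to is commonly based on rate coding: a non-spiking neuron's ReLU activation is approximated by the firing rate of an integrate-and-fire (IF) neuron over a window of timesteps. The sketch below is a minimal, hypothetical illustration of that principle (it is not the paper's AccelNN implementation): an IF neuron with reset-by-subtraction is driven by a constant input equal to an activation value, and its spike rate over T timesteps approximates that activation.

```python
def if_neuron_rate(activation: float, timesteps: int, threshold: float = 1.0) -> float:
    """Spike rate of an integrate-and-fire neuron fed a constant input current.

    With reset-by-subtraction, the rate over many timesteps approximates
    activation / threshold (for 0 <= activation <= threshold).
    """
    v = 0.0           # membrane potential
    spikes = 0
    for _ in range(timesteps):
        v += activation           # integrate the input current
        if v >= threshold:        # fire when the threshold is crossed
            spikes += 1
            v -= threshold        # reset by subtraction: keep residual charge
    return spikes / timesteps     # rate-coded approximation of the activation

rate = if_neuron_rate(activation=0.3, timesteps=100)
print(rate)  # approximately 0.3
```

Reset-by-subtraction (rather than reset-to-zero) preserves the residual membrane charge after each spike, which is what lets the long-run firing rate track the activation closely; the event-driven character of such spiking networks is also what the circuit- and architecture-level techniques in the paper aim to exploit.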