Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware

Peyton S. Chandarana, Mohammadreza Mohammadi, J. Seekings, Ramtin Zand
DOI: 10.1109/IGSC55832.2022.9969357
Published in: 2022 IEEE 13th International Green and Sustainable Computing Conference (IGSC), October 10, 2022
Citations: 1

Abstract

As the technology industry moves toward implementing tasks such as natural language processing, path planning, and image classification on smaller edge computing devices, the demand for more efficient implementations of algorithms and hardware accelerators has become a significant area of research. In recent years, several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs). On the other hand, spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions over even the aforementioned edge DNN accelerators when deployed on specialized neuromorphic event-based/asynchronous hardware. While neuromorphic hardware has demonstrated great potential for accelerating deep learning tasks at the edge, the current space of algorithms and hardware is limited and still in rather early development. Thus, many hybrid approaches have been proposed that aim to convert pre-trained DNNs into SNNs. In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware with respect to latency, power, and energy. Our experimental results show that when compared against the Intel Neural Compute Stick 2, Intel's neuromorphic processor, Loihi, consumes up to 27× less power and 5× less energy in the tested image classification tasks by using our SNN improvement techniques.
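To illustrate the DNN-to-SNN conversion the abstract refers to, the sketch below shows one common rate-coding approach: a pre-trained ReLU network's weights are reused directly, and each ReLU unit is replaced by an integrate-and-fire neuron whose firing rate over many timesteps approximates the original activation. This is a minimal illustrative sketch with hypothetical random weights standing in for trained parameters; it is not the paper's code, and the specific conversion and optimization techniques used for Loihi are described in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" 2-layer ReLU network (random weights stand in
# for trained parameters in this sketch): input dim 4 -> hidden 8 -> output 2.
W1 = rng.normal(0.0, 0.5, (8, 4))
W2 = rng.normal(0.0, 0.5, (2, 8))

def ann_forward(x):
    """Original analog network: two ReLU layers."""
    h = np.maximum(W1 @ x, 0.0)
    return np.maximum(W2 @ h, 0.0)

def snn_forward(x, T=500):
    """Rate-coded conversion: each ReLU becomes an integrate-and-fire
    neuron with unit threshold and soft reset (subtract threshold on
    spike). The input x is applied as a constant current; spike counts
    over T timesteps, divided by T, approximate the ReLU activations
    (valid while activations stay below one spike per step)."""
    v1, v2 = np.zeros(8), np.zeros(2)
    out_spikes = np.zeros(2)
    for _ in range(T):
        v1 += W1 @ x                       # integrate input current
        s1 = (v1 >= 1.0).astype(float)     # fire on threshold crossing
        v1 -= s1                           # soft reset
        v2 += W2 @ s1                      # integrate spike-driven current
        s2 = (v2 >= 1.0).astype(float)
        v2 -= s2
        out_spikes += s2
    return out_spikes / T                  # firing rate ~= ReLU output

# Small inputs keep all activations below 1 spike/step, so rates track ReLU.
x = rng.uniform(0.0, 0.2, 4)
rate_error = np.max(np.abs(ann_forward(x) - snn_forward(x)))
```

The approximation error shrinks as the simulation window T grows, which is the core latency/accuracy trade-off the paper's deployment techniques target: fewer timesteps mean lower latency and energy on neuromorphic hardware, at the cost of a coarser rate approximation.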