Nanoscale Accelerators for Artificial Neural Networks

IEEE Nanotechnology Magazine · IF 2.3 · Q3 (Nanoscience & Nanotechnology) · Publication date: 2022-12-01 · DOI: 10.1109/MNANO.2022.3208757
Farzad Niknia, Ziheng Wang, Shanshan Liu, A. Louri, Fabrizio Lombardi
{"title":"Nanoscale Accelerators for Artificial Neural Networks","authors":"Farzad Niknia, Ziheng Wang, Shanshan Liu, A. Louri, Fabrizio Lombardi","doi":"10.1109/MNANO.2022.3208757","DOIUrl":null,"url":null,"abstract":"Artificial neural networks (ANNs) are usually implemented in accelerators to achieve efficient processing of inference; the hardware implementation of an ANN accelerator requires careful consideration on overhead metrics (such as delay, energy and area) and performance (usually measured by the accuracy). This paper considers the ASIC-based accelerator from arithmetic design considerations. The feasibility of using different schemes (parallel, serial and hybrid arrangements) and different types of arithmetic computing (floating-point, fixed-point and stochastic computing) when implementing multilayer perceptrons (MLPs) are considered. The evaluation results of MLPs for two popular datasets show that the floating-point/fixed-point-based parallel (hybrid) design achieves the smallest latency (area) and the SC-based design offers the lowest energy dissipation.","PeriodicalId":44724,"journal":{"name":"IEEE Nanotechnology Magazine","volume":"16 1","pages":"14-21"},"PeriodicalIF":2.3000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Nanotechnology Magazine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MNANO.2022.3208757","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"NANOSCIENCE & NANOTECHNOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Artificial neural networks (ANNs) are usually implemented in accelerators to achieve efficient inference; the hardware implementation of an ANN accelerator requires careful consideration of both overhead metrics (such as delay, energy, and area) and performance (usually measured by accuracy). This paper considers ASIC-based accelerators from the perspective of arithmetic design. The feasibility of using different schemes (parallel, serial, and hybrid arrangements) and different types of arithmetic computing (floating-point, fixed-point, and stochastic computing (SC)) when implementing multilayer perceptrons (MLPs) is considered. Evaluation results for MLPs on two popular datasets show that the floating-point/fixed-point-based parallel (hybrid) design achieves the smallest latency (area), while the SC-based design offers the lowest energy dissipation.
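
The abstract contrasts floating-point, fixed-point, and stochastic computing arithmetic for MLP inference. As a rough illustration of the difference in arithmetic style (not taken from the paper; the function names, bit width, and stream length below are assumptions chosen for clarity), the following sketch shows a fixed-point multiply-accumulate next to a unipolar SC multiply, where values in [0, 1] are encoded as random bit streams and multiplied with a bitwise AND.

import random

def fixed_point_mac(inputs, weights, frac_bits=8):
    """Fixed-point MAC: values are scaled by 2**frac_bits and kept as integers."""
    scale = 1 << frac_bits
    acc = 0
    for x, w in zip(inputs, weights):
        xq = int(round(x * scale))      # quantize input
        wq = int(round(w * scale))      # quantize weight
        acc += xq * wq                  # integer multiply-accumulate
    return acc / (scale * scale)        # rescale back to a real value

def sc_multiply(a, b, stream_len=1024):
    """Unipolar SC multiply: a and b in [0, 1] become random bit streams;
    ANDing the streams gives a stream whose mean approximates a*b."""
    stream_a = [random.random() < a for _ in range(stream_len)]
    stream_b = [random.random() < b for _ in range(stream_len)]
    product_stream = [x and y for x, y in zip(stream_a, stream_b)]
    return sum(product_stream) / stream_len

if __name__ == "__main__":
    xs, ws = [0.25, 0.5, 0.75], [0.4, 0.6, 0.2]
    print("fixed-point MAC:", fixed_point_mac(xs, ws))
    print("SC multiply 0.5*0.6 ~", sc_multiply(0.5, 0.6))

The sketch only illustrates why the trade-offs differ: the fixed-point path needs integer multipliers but finishes in one pass, while the SC path replaces the multiplier with a single AND gate at the cost of long bit streams, which is consistent with the paper's finding that SC favors energy while parallel fixed/floating-point designs favor latency.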
Source Journal
IEEE Nanotechnology Magazine (Nanoscience & Nanotechnology)
CiteScore: 2.90
Self-citation rate: 6.20%
Articles published: 46
Journal introduction: IEEE Nanotechnology Magazine publishes peer-reviewed articles that present emerging trends and practices in nanotechnology research and development, key insights, and tutorial surveys in the field of interest to the member societies of the IEEE Nanotechnology Council. IEEE Nanotechnology Magazine is limited to the scope of the Nanotechnology Council, which supports the theory, design, and development of nanotechnology and its scientific, engineering, and industrial applications.