Nanoscale Accelerators for Artificial Neural Networks
Farzad Niknia, Ziheng Wang, Shanshan Liu, A. Louri, Fabrizio Lombardi
IEEE Nanotechnology Magazine, vol. 16, no. 1, pp. 14-21, December 2022. DOI: 10.1109/MNANO.2022.3208757
Abstract
Artificial neural networks (ANNs) are usually implemented in accelerators to achieve efficient inference processing; the hardware implementation of an ANN accelerator requires careful consideration of both overhead metrics (such as delay, energy, and area) and performance (usually measured by accuracy). This paper examines ASIC-based accelerators from an arithmetic design perspective. The feasibility of using different schemes (parallel, serial, and hybrid arrangements) and different types of arithmetic computing (floating-point, fixed-point, and stochastic computing) to implement multilayer perceptrons (MLPs) is considered. Evaluation results for MLPs on two popular datasets show that the floating-point/fixed-point-based parallel (hybrid) design achieves the smallest latency (area), while the stochastic computing (SC)-based design offers the lowest energy dissipation.
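The abstract contrasts floating-point, fixed-point, and stochastic computing (SC) arithmetic for MLP hardware. As a rough illustration of the trade-off behind the energy result (this sketch is not from the paper; the function names, bitstream length, and bit widths are assumptions), the snippet below compares a fixed-point multiply with a unipolar SC multiply, where a single bitwise AND of two independent bitstreams approximates the product of two values in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(value, length=1024):
    """Encode a value in [0, 1] as a unipolar stochastic bitstream:
    each bit is 1 with probability equal to the encoded value."""
    return (rng.random(length) < value).astype(np.uint8)

def sc_multiply(x, w, length=1024):
    """Unipolar SC multiplication: the AND of two independent bitstreams
    has a mean that approximates x * w (one AND gate per multiply)."""
    return np.mean(to_bitstream(x, length) & to_bitstream(w, length))

def fxp_multiply(x, w, frac_bits=8):
    """Fixed-point multiplication with `frac_bits` fractional bits,
    scaled back to a real value after the integer product."""
    scale = 1 << frac_bits
    xq, wq = round(x * scale), round(w * scale)
    return (xq * wq) / (scale * scale)

x, w = 0.75, 0.40
print(f"exact       : {x * w:.4f}")
print(f"fixed-point : {fxp_multiply(x, w):.4f}")
print(f"stochastic  : {sc_multiply(x, w):.4f}")  # accuracy improves with longer streams
```

SC replaces a multiplier array with a single logic gate per product, which is consistent with the abstract's finding that the SC-based design dissipates the least energy; the cost is that accuracy depends on bitstream length, so latency grows with the required precision.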
Journal Description:
IEEE Nanotechnology Magazine publishes peer-reviewed articles that present emerging trends and practices in nanotechnology research and development, key insights, and tutorial surveys in the field of interest to the member societies of the IEEE Nanotechnology Council. IEEE Nanotechnology Magazine is limited to the scope of the Nanotechnology Council, which supports the theory, design, and development of nanotechnology and its scientific, engineering, and industrial applications.