ASIC Design of Nanoscale Artificial Neural Networks for Inference/Training by Floating-Point Arithmetic

IF 2.1 · CAS Tier 4 (Engineering & Technology) · JCR Q3 (ENGINEERING, ELECTRICAL & ELECTRONIC) · IEEE Transactions on Nanotechnology, vol. 23, pp. 208-216 · Pub Date: 2024-02-20 · DOI: 10.1109/TNANO.2024.3367916 · https://ieeexplore.ieee.org/document/10440496/
Farzad Niknia;Ziheng Wang;Shanshan Liu;Pedro Reviriego;Ahmed Louri;Fabrizio Lombardi
{"title":"利用浮点运算推理/训练纳米级人工神经网络的 ASIC 设计","authors":"Farzad Niknia;Ziheng Wang;Shanshan Liu;Pedro Reviriego;Ahmed Louri;Fabrizio Lombardi","doi":"10.1109/TNANO.2024.3367916","DOIUrl":null,"url":null,"abstract":"Inference and on-chip training of Artificial Neural Networks (ANNs) are challenging computational processes for large datasets; hardware implementations are needed to accelerate this computation, while meeting metrics such as operating frequency, power dissipation and accuracy. In this article, a high-performance ASIC-based design is proposed to implement both forward and backward propagations of multi-layer perceptrons (MLPs) at the nanoscales. To attain a higher accuracy, floating-point arithmetic units for a multiply-and-accumulate (MAC) array are employed in the proposed design; moreover, a hybrid implementation scheme is utilized to achieve flexibility (for networks of different size) and comprehensively low hardware overhead. The proposed design is fully pipelined, and its performance is independent of network size, except for the number of cycles and latency. The efficiency of the proposed nanoscale MLP-based design for inference (as taking place over multiple steps) and training (due to the complex processing in backward propagation by eliminating many redundant calculations) is analyzed. Moreover, the impact of different floating-point precision formats on the final accuracy and hardware metrics under the same design constraints is studied. A comparative evaluation of the proposed MLP design for different datasets and floating-point precision formats is provided. Results show that compared to current schemes found in the technical literatures, the proposed design has the best operating frequency and accuracy with still good latency and energy dissipation.","PeriodicalId":449,"journal":{"name":"IEEE Transactions on Nanotechnology","volume":"23 ","pages":"208-216"},"PeriodicalIF":2.1000,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ASIC Design of Nanoscale Artificial Neural Networks for Inference/Training by Floating-Point Arithmetic\",\"authors\":\"Farzad Niknia;Ziheng Wang;Shanshan Liu;Pedro Reviriego;Ahmed Louri;Fabrizio Lombardi\",\"doi\":\"10.1109/TNANO.2024.3367916\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Inference and on-chip training of Artificial Neural Networks (ANNs) are challenging computational processes for large datasets; hardware implementations are needed to accelerate this computation, while meeting metrics such as operating frequency, power dissipation and accuracy. In this article, a high-performance ASIC-based design is proposed to implement both forward and backward propagations of multi-layer perceptrons (MLPs) at the nanoscales. To attain a higher accuracy, floating-point arithmetic units for a multiply-and-accumulate (MAC) array are employed in the proposed design; moreover, a hybrid implementation scheme is utilized to achieve flexibility (for networks of different size) and comprehensively low hardware overhead. The proposed design is fully pipelined, and its performance is independent of network size, except for the number of cycles and latency. The efficiency of the proposed nanoscale MLP-based design for inference (as taking place over multiple steps) and training (due to the complex processing in backward propagation by eliminating many redundant calculations) is analyzed. 
Moreover, the impact of different floating-point precision formats on the final accuracy and hardware metrics under the same design constraints is studied. A comparative evaluation of the proposed MLP design for different datasets and floating-point precision formats is provided. Results show that compared to current schemes found in the technical literatures, the proposed design has the best operating frequency and accuracy with still good latency and energy dissipation.\",\"PeriodicalId\":449,\"journal\":{\"name\":\"IEEE Transactions on Nanotechnology\",\"volume\":\"23 \",\"pages\":\"208-216\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-02-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Nanotechnology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10440496/\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Nanotechnology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10440496/","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Inference and on-chip training of Artificial Neural Networks (ANNs) are challenging computational processes for large datasets; hardware implementations are needed to accelerate this computation while meeting metrics such as operating frequency, power dissipation and accuracy. In this article, a high-performance ASIC-based design is proposed to implement both forward and backward propagation of multi-layer perceptrons (MLPs) at the nanoscale. To attain higher accuracy, floating-point arithmetic units for a multiply-and-accumulate (MAC) array are employed in the proposed design; moreover, a hybrid implementation scheme is utilized to achieve flexibility (for networks of different sizes) and comprehensively low hardware overhead. The proposed design is fully pipelined, and its performance is independent of network size except for the number of cycles and the latency. The efficiency of the proposed nanoscale MLP-based design is analyzed for both inference (which takes place over multiple steps) and training (whose complex backward propagation is handled by eliminating many redundant calculations). Moreover, the impact of different floating-point precision formats on the final accuracy and the hardware metrics under the same design constraints is studied. A comparative evaluation of the proposed MLP design for different datasets and floating-point precision formats is provided. Results show that, compared to current schemes found in the technical literature, the proposed design achieves the best operating frequency and accuracy while maintaining good latency and energy dissipation.
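To make concrete the computation such an ASIC accelerates, below is a minimal Python/NumPy sketch (illustrative, not from the paper) of one MLP layer's forward and backward propagation; the per-element multiply-and-accumulate (MAC) operations are exactly the workload a pipelined floating-point MAC array executes. All names, shapes and the learning rate are assumptions chosen for the example.

```python
import numpy as np

# Minimal sketch of one MLP layer's forward and backward pass.
# The multiply-and-accumulate (MAC) operations below are the workload
# that a pipelined floating-point MAC array would execute in hardware.
# Shapes, target and learning rate are illustrative, not from the paper.

rng = np.random.default_rng(0)
x = rng.standard_normal(8).astype(np.float32)       # layer input
W = rng.standard_normal((4, 8)).astype(np.float32)  # weights
b = np.zeros(4, dtype=np.float32)                   # biases

# Forward propagation: z = Wx + b, a = sigmoid(z)
z = W @ x + b                       # one MAC per weight
a = 1.0 / (1.0 + np.exp(-z))

# Backward propagation: from the gradient of the loss w.r.t. a,
# compute weight/bias gradients and the gradient passed upstream.
target = np.array([1, 0, 0, 0], dtype=np.float32)
grad_a = a - target                 # e.g. squared-error loss
grad_z = grad_a * a * (1.0 - a)     # sigmoid derivative
grad_W = np.outer(grad_z, x)        # again a MAC-array workload
grad_b = grad_z
grad_x = W.T @ grad_z               # propagated to the previous layer

# Weight update (the on-chip training step)
lr = np.float32(0.1)
W -= lr * grad_W
b -= lr * grad_b
```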
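The abstract also studies how the floating-point precision format trades final accuracy against hardware metrics. The toy sketch below (an assumption for illustration, using NumPy's float16 and float32 as stand-ins for reduced- and higher-precision MAC units; the formats actually evaluated in the paper may differ) shows why: accumulating a dot product in a narrower format rounds at every step, so error grows while the datapath shrinks.

```python
import numpy as np

# Toy illustration of how floating-point precision affects a MAC result.
# float16/float32 stand in for reduced- and higher-precision hardware
# MAC units; the paper's actual precision formats may differ.

rng = np.random.default_rng(1)
w = rng.standard_normal(4096)
x = rng.standard_normal(4096)

ref = np.dot(w.astype(np.float64), x.astype(np.float64))  # reference

def mac(w, x, dtype):
    """Accumulate w[i]*x[i] entirely in the given precision."""
    acc = dtype(0)
    for wi, xi in zip(w.astype(dtype), x.astype(dtype)):
        acc = dtype(acc + wi * xi)   # rounding happens at every step
    return acc

for dtype in (np.float16, np.float32):
    err = abs(float(mac(w, x, dtype)) - ref)
    print(f"{dtype.__name__}: |error| = {err:.3e}")

# Narrower formats shrink the MAC datapath (area, power, delay) but
# accumulate more rounding error, which can reduce final accuracy.
```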
Source Journal
IEEE Transactions on Nanotechnology (Engineering & Technology – Materials Science: Multidisciplinary)
CiteScore: 4.80
Self-citation rate: 8.30%
Articles per year: 74
Review time: 8.3 months
Journal Description: The IEEE Transactions on Nanotechnology is devoted to the publication of manuscripts of archival value in the general area of nanotechnology, which is rapidly emerging as one of the fastest growing and most promising new technological developments for the next generation and beyond.
Latest Articles in This Journal
High-Speed and Area-Efficient Serial IMPLY-Based Approximate Subtractor and Comparator for Image Processing and Neural Networks
Design of a Graphene Based Terahertz Perfect Metamaterial Absorber With Multiple Sensing Performance
Impact of Electron and Hole Trap Profiles in BE-TOX on Retention Characteristics of 3D NAND Flash Memory
Full 3-D Monte Carlo Simulation of Coupled Electron-Phonon Transport: Self-Heating in a Nanoscale FinFET
Experimental Investigations and Characterization of Surfactant Activated Mixed Metal Oxide (MMO) Nanomaterial