A reconfigurable multi-precision quantization-aware nonlinear activation function hardware module for DNNs

IF 1.9 · CAS Tier 3 (Engineering & Technology) · JCR Q3 (Engineering, Electrical & Electronic) · Microelectronics Journal · Pub Date: 2024-07-22 · DOI: 10.1016/j.mejo.2024.106346
Qi Hong , Zhiming Liu , Qiang Long , Hao Tong , Tianxu Zhang , Xiaowen Zhu , Yunong Zhao , Hua Ru , Yuxing Zha , Ziyuan Zhou , Jiashun Wu , Hongtao Tan , Weiqiang Hong , Yaohua Xu , Xiaohui Guo
{"title":"用于 DNN 的可重构多精度量化感知非线性激活函数硬件模块","authors":"Qi Hong ,&nbsp;Zhiming Liu ,&nbsp;Qiang Long ,&nbsp;Hao Tong ,&nbsp;Tianxu Zhang ,&nbsp;Xiaowen Zhu ,&nbsp;Yunong Zhao ,&nbsp;Hua Ru ,&nbsp;Yuxing Zha ,&nbsp;Ziyuan Zhou ,&nbsp;Jiashun Wu ,&nbsp;Hongtao Tan ,&nbsp;Weiqiang Hong ,&nbsp;Yaohua Xu ,&nbsp;Xiaohui Guo","doi":"10.1016/j.mejo.2024.106346","DOIUrl":null,"url":null,"abstract":"<div><p>In recent years, the increasing variety of nonlinear activation functions (NAFs) in deep neural networks (DNNs) has led to higher computational demands. However, hardware implementation faces challenges such as lack of flexibility, high hardware cost, and limited accuracy. This paper proposes a highly flexible and low-cost hardware solution for implementing activation functions to overcome these issues. Based on the piecewise linear (PWL) approximation method, our method supports NAFs with different accuracy configurations through a customized implementation strategy to meet the requirements in different scenario applications. In this paper, the symmetry of the activation function is investigated, and incorporate curve translation preprocessing and data quantization to significantly reduce hardware storage costs. The modular hardware architecture proposed in this study supports NAFs of multiple accuracies, which is suitable for designing deep learning neural network accelerators in various scenarios, avoiding the need to design dedicated hardware circuits for the activation function layer and enhances circuit design efficiency. The proposed hardware architecture is validated on the Xilinx XC7Z010 development board. The experimental results show that the average absolute error (AAE) is reduced by about 35.6 % at a clock frequency of 312.5 MHz. Additionally, the accuracy loss of the model is maximized to −0.684 % after replacing the activation layer function of DNNs under the PyTorch framework.</p></div>","PeriodicalId":49818,"journal":{"name":"Microelectronics Journal","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A reconfigurable multi-precision quantization-aware nonlinear activation function hardware module for DNNs\",\"authors\":\"Qi Hong ,&nbsp;Zhiming Liu ,&nbsp;Qiang Long ,&nbsp;Hao Tong ,&nbsp;Tianxu Zhang ,&nbsp;Xiaowen Zhu ,&nbsp;Yunong Zhao ,&nbsp;Hua Ru ,&nbsp;Yuxing Zha ,&nbsp;Ziyuan Zhou ,&nbsp;Jiashun Wu ,&nbsp;Hongtao Tan ,&nbsp;Weiqiang Hong ,&nbsp;Yaohua Xu ,&nbsp;Xiaohui Guo\",\"doi\":\"10.1016/j.mejo.2024.106346\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In recent years, the increasing variety of nonlinear activation functions (NAFs) in deep neural networks (DNNs) has led to higher computational demands. However, hardware implementation faces challenges such as lack of flexibility, high hardware cost, and limited accuracy. This paper proposes a highly flexible and low-cost hardware solution for implementing activation functions to overcome these issues. Based on the piecewise linear (PWL) approximation method, our method supports NAFs with different accuracy configurations through a customized implementation strategy to meet the requirements in different scenario applications. In this paper, the symmetry of the activation function is investigated, and incorporate curve translation preprocessing and data quantization to significantly reduce hardware storage costs. 
The modular hardware architecture proposed in this study supports NAFs of multiple accuracies, which is suitable for designing deep learning neural network accelerators in various scenarios, avoiding the need to design dedicated hardware circuits for the activation function layer and enhances circuit design efficiency. The proposed hardware architecture is validated on the Xilinx XC7Z010 development board. The experimental results show that the average absolute error (AAE) is reduced by about 35.6 % at a clock frequency of 312.5 MHz. Additionally, the accuracy loss of the model is maximized to −0.684 % after replacing the activation layer function of DNNs under the PyTorch framework.</p></div>\",\"PeriodicalId\":49818,\"journal\":{\"name\":\"Microelectronics Journal\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2024-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Microelectronics Journal\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S187923912400050X\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Microelectronics Journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S187923912400050X","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, the increasing variety of nonlinear activation functions (NAFs) in deep neural networks (DNNs) has led to higher computational demands, yet hardware implementations face challenges such as limited flexibility, high hardware cost, and limited accuracy. This paper proposes a highly flexible, low-cost hardware solution for implementing activation functions that overcomes these issues. Based on the piecewise linear (PWL) approximation method, the design supports NAFs with different accuracy configurations through a customized implementation strategy, meeting the requirements of different application scenarios. The symmetry of the activation functions is investigated, and curve translation preprocessing and data quantization are incorporated to significantly reduce hardware storage costs. The proposed modular hardware architecture supports NAFs at multiple accuracies, making it suitable for deep learning accelerator design in a variety of scenarios: it avoids the need for dedicated hardware circuits for the activation function layer and improves circuit design efficiency. The architecture is validated on a Xilinx XC7Z010 development board. Experimental results show that the average absolute error (AAE) is reduced by about 35.6% at a clock frequency of 312.5 MHz, and that the maximum model accuracy loss is −0.684% after replacing the activation-layer functions of DNNs under the PyTorch framework.
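To make the approach concrete, the sketch below illustrates the general technique the abstract names: PWL approximation of a NAF with a symmetry-halved lookup table and fixed-point quantization of the segment coefficients, using the sigmoid as an example. This is a minimal NumPy illustration under assumptions, not the authors' implementation; the function names, segment count, and fractional bit width are hypothetical choices made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_pwl_lut(f, x_min, x_max, n_segments, frac_bits):
    """Fit one line segment per interval and quantize slope/intercept
    to fixed point with `frac_bits` fractional bits. Multi-precision
    support amounts to choosing n_segments/frac_bits per accuracy target."""
    edges = np.linspace(x_min, x_max, n_segments + 1)
    scale = 1 << frac_bits
    slopes, intercepts = [], []
    for x0, x1 in zip(edges[:-1], edges[1:]):
        k = (f(x1) - f(x0)) / (x1 - x0)              # segment slope
        b = f(x0) - k * x0                           # segment intercept
        slopes.append(round(k * scale) / scale)      # quantized slope
        intercepts.append(round(b * scale) / scale)  # quantized intercept
    return edges, np.array(slopes), np.array(intercepts)

def pwl_eval(x, edges, slopes, intercepts):
    """Evaluate the PWL approximation. Only x >= 0 is tabulated;
    negative inputs reuse the sigmoid symmetry f(-x) = 1 - f(x),
    which halves the LUT storage."""
    xa = np.abs(x)
    idx = np.clip(np.searchsorted(edges, xa) - 1, 0, len(slopes) - 1)
    y = slopes[idx] * xa + intercepts[idx]
    return np.where(x < 0, 1.0 - y, y)

# Average absolute error (AAE) of the approximation over a test grid.
edges, k, b = build_pwl_lut(sigmoid, 0.0, 8.0, n_segments=16, frac_bits=12)
xs = np.linspace(-8.0, 8.0, 10001)
aae = np.mean(np.abs(pwl_eval(xs, edges, k, b) - sigmoid(xs)))
print(f"AAE with 16 segments, 12 fractional bits: {aae:.6f}")
```

Exploiting symmetry in this way stores coefficients for only half the input range, which is the storage-reduction idea the abstract attributes to curve translation preprocessing and quantization.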
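The reported −0.684% figure comes from substituting the activation layers of trained DNNs in PyTorch with the approximated versions and re-measuring model accuracy. Below is a hedged sketch of how such a substitution can be done; `PWLSigmoid` and `replace_sigmoids` are illustrative names assumed for this example, not the paper's code, and the LUT buffers come from the NumPy sketch above.

```python
import torch
import torch.nn as nn

class PWLSigmoid(nn.Module):
    """Drop-in replacement for nn.Sigmoid that evaluates the quantized
    PWL approximation, so the software model sees the same accuracy
    the hardware module would deliver."""
    def __init__(self, edges, slopes, intercepts):
        super().__init__()
        self.register_buffer("edges", torch.as_tensor(edges, dtype=torch.float32))
        self.register_buffer("slopes", torch.as_tensor(slopes, dtype=torch.float32))
        self.register_buffer("intercepts", torch.as_tensor(intercepts, dtype=torch.float32))

    def forward(self, x):
        xa = x.abs()
        idx = torch.clamp(torch.bucketize(xa, self.edges) - 1, 0, len(self.slopes) - 1)
        y = self.slopes[idx] * xa + self.intercepts[idx]
        return torch.where(x < 0, 1.0 - y, y)  # sigmoid symmetry

def replace_sigmoids(model: nn.Module, pwl: nn.Module) -> None:
    """Recursively swap every nn.Sigmoid in `model` for the PWL version
    (the module is stateless, so one shared instance is fine)."""
    for name, child in model.named_children():
        if isinstance(child, nn.Sigmoid):
            setattr(model, name, pwl)
        else:
            replace_sigmoids(child, pwl)

# Usage (hypothetical): reuse edges/k/b from the NumPy sketch above.
# replace_sigmoids(model, PWLSigmoid(edges, k, b))
```

After the swap, evaluating the model on its test set and comparing against the original accuracy yields the kind of accuracy-loss measurement the abstract reports.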

Source journal
Microelectronics Journal (Engineering: Electrical & Electronic)
CiteScore: 4.00
Self-citation rate: 27.30%
Annual publications: 222
Review time: 43 days
期刊介绍: Published since 1969, the Microelectronics Journal is an international forum for the dissemination of research and applications of microelectronic systems, circuits, and emerging technologies. Papers published in the Microelectronics Journal have undergone peer review to ensure originality, relevance, and timeliness. The journal thus provides a worldwide, regular, and comprehensive update on microelectronic circuits and systems. The Microelectronics Journal invites papers describing significant research and applications in all of the areas listed below. Comprehensive review/survey papers covering recent developments will also be considered. The Microelectronics Journal covers circuits and systems. This topic includes but is not limited to: Analog, digital, mixed, and RF circuits and related design methodologies; Logic, architectural, and system level synthesis; Testing, design for testability, built-in self-test; Area, power, and thermal analysis and design; Mixed-domain simulation and design; Embedded systems; Non-von Neumann computing and related technologies and circuits; Design and test of high complexity systems integration; SoC, NoC, SIP, and NIP design and test; 3-D integration design and analysis; Emerging device technologies and circuits, such as FinFETs, SETs, spintronics, SFQ, MTJ, etc. Application aspects such as signal and image processing including circuits for cryptography, sensors, and actuators including sensor networks, reliability and quality issues, and economic models are also welcome.
Latest articles in this journal

Thermoreflectance property of gallium nitride
3-D impedance matching network (IMN) based on through-silicon via (TSV) for RF energy harvesting system
A new method for temperature field characterization of microsystems based on transient thermal simulation
Editorial Board
Study on the influence mechanism of gate oxide degradation on DM EMI signals in SiC MOSFET