Toward Efficient Retraining: A Large-Scale Approximate Neural Network Framework With Cross-Layer Optimization
Tianyang Yu; Bi Wu; Ke Chen; Chenggang Yan; Weiqiang Liu
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
DOI: 10.1109/TVLSI.2024.3386900
Published: 2024-04-17
https://ieeexplore.ieee.org/document/10504795/
Citations: 0
Abstract
Leveraging approximate multipliers in approximate neural networks (ApproxNNs) can effectively reduce hardware area and power consumption, making them suitable for edge-side applications. However, the layer-by-layer propagation of errors limits the application of approximate multipliers to large-scale ApproxNNs and complex tasks. Retraining techniques that account for approximate multiplication errors are commonly used to compensate for the accuracy loss. However, because the errors introduced by approximate multipliers are irregular, existing generic acceleration hardware (e.g., GPUs) cannot efficiently simulate their function to accelerate retraining, which leads to a huge retraining overhead when ApproxNNs are deployed. In this article, we propose an ApproxNN framework that introduces errors at regular, controlled positions, enabling high-efficiency retraining of large-scale ApproxNNs. An approximate multiplier design matched to this framework is also presented to verify the framework's effectiveness. Experimental results demonstrate that the proposed ApproxNN framework achieves up to $46\times$ speedup in retraining, and the proposed approximate multiplier reduces area and power-delay product (PDP) by 31% and 63%, respectively, compared with an exact multiplier. When applied to ResNet50 on the ImageNet dataset with only 15 epochs of retraining, accuracy drops by only 1.13% relative to the floating-point neural network (NN) model, surpassing other state-of-the-art designs.
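The paper's framework and multiplier are not reproduced here, but the bottleneck it targets can be illustrated with a minimal sketch: simulating an approximate multiplier inside a quantized layer typically requires per-element table lookups instead of a fused exact matmul, and those scattered gathers are what generic GPUs handle poorly. Everything below is an illustrative assumption, not the authors' design: the stand-in "approximate" multiplier (an exact 8-bit product with its four least-significant bits truncated), the function names, and the unsigned-8-bit quantization scheme.

```python
import numpy as np

# Hypothetical stand-in approximate multiplier: the exact product of two
# unsigned 8-bit operands with the 4 least-significant bits truncated.
# The paper uses a custom multiplier matched to its framework; this is
# only a placeholder with a similarly irregular error pattern.
def approx_mult_8bit(a: int, b: int) -> int:
    return (a * b) & ~0xF  # drop the 4 LSBs of the exact product

# Precompute a 256x256 lookup table over all 8-bit operand pairs.
LUT = np.array([[approx_mult_8bit(a, b) for b in range(256)]
                for a in range(256)], dtype=np.int32)

def approx_linear(x_q: np.ndarray, w_q: np.ndarray) -> np.ndarray:
    """Simulate a quantized linear layer y = x @ W^T where every scalar
    product comes from the approximate-multiplier LUT.

    x_q: (batch, in_features) uint8 activations
    w_q: (out_features, in_features) uint8 weights
    """
    # Gather per-element approximate products, then accumulate exactly.
    # These data-dependent gathers, repeated every forward pass of
    # retraining, are far slower than one fused exact matmul.
    prods = LUT[x_q[:, None, :], w_q[None, :, :]]  # (batch, out, in)
    return prods.sum(axis=-1)

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(4, 16), dtype=np.uint8)
w = rng.integers(0, 256, size=(8, 16), dtype=np.uint8)
print(approx_linear(x, w).shape)  # (4, 8)
```

The contrast motivates the paper's approach: if the injected errors sit at regular, controlled positions, the error model can be folded into standard dense operations instead of per-element lookups, which is what makes high-efficiency retraining on stock hardware possible.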
About the Journal
The IEEE Transactions on VLSI Systems is published as a monthly journal under the co-sponsorship of the IEEE Circuits and Systems Society, the IEEE Computer Society, and the IEEE Solid-State Circuits Society.
Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chips and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels.
To address this critical area through a common forum, the IEEE Transactions on VLSI Systems was founded. The editorial board, consisting of international experts, invites original papers that emphasize the novel systems-integration aspects of microelectronic systems, including interactions among systems design and partitioning, logic and memory design, digital and analog circuit design, layout synthesis, CAD tools, chip and wafer fabrication, testing and packaging, and system-level qualification. The coverage of this Transactions thus focuses on VLSI/ULSI microelectronic systems integration.