{"title":"VLSI Architecture Design for Adder Convolution Neural Network Accelerator","authors":"Mingyong Zhuang, Xinhui Liao, Huhong Wu, Jianyang Zhou, Zichao Guo","doi":"10.1109/asid52932.2021.9651682","DOIUrl":null,"url":null,"abstract":"Convolution Neural Network (ConvNet) achieved good performance in a variety of image processing tasks. How-ever, a large number of multiplication operations in convolution layers affect mobile device deployment of the ConvNet. Recently, an Adder Convolution Network (AdderNet) was proposed to reduce the multiplication operations of common convolutional neural networks. In this paper, we analyzed differences in calculation processes between the AdderNet and the ConvNet and proposed the VLSI architecture of the AdderNet. In addition to analyzing resource consumption of adder convolutional layers, we also built the whole LeNet neural network with the adder convolutional layers and calculated inference latency. Experiment results showed the proposed VLSI architecture of the AdderNet reduced the latency by 29.26%. Compared with the multiplication convolution layer, the resource consumptions of DSP, Flip-Flop, and LUT for the adder convolution layer were reduced respectively by 6.25%, 0.31%, and 0.86%.","PeriodicalId":150884,"journal":{"name":"2021 IEEE 15th International Conference on Anti-counterfeiting, Security, and Identification (ASID)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 15th International Conference on Anti-counterfeiting, Security, and Identification (ASID)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/asid52932.2021.9651682","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Convolutional Neural Networks (ConvNets) have achieved good performance in a variety of image processing tasks. However, the large number of multiplication operations in convolution layers hinders deployment of ConvNets on mobile devices. Recently, the Adder Convolution Network (AdderNet) was proposed to reduce the multiplication operations of common convolutional neural networks. In this paper, we analyze the differences in the calculation processes of AdderNet and a conventional ConvNet and propose a VLSI architecture for AdderNet. In addition to analyzing the resource consumption of adder convolution layers, we also build the whole LeNet neural network with adder convolution layers and measure its inference latency. Experimental results show that the proposed VLSI architecture for AdderNet reduces latency by 29.26%. Compared with a multiplication-based convolution layer, the DSP, flip-flop, and LUT resource consumption of the adder convolution layer is reduced by 6.25%, 0.31%, and 0.86%, respectively.
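To make the computational difference concrete, the sketch below contrasts the per-pixel response of a standard convolution (multiply-accumulate) with the AdderNet response (negative L1 distance, computed with only subtractions, absolute values, and accumulations), following the published AdderNet formulation. This is an illustrative software sketch only; the function names, shapes, and values are not from the paper, and it does not represent the proposed VLSI architecture.

```python
import numpy as np

def conv_patch(patch, kernel):
    """Standard convolution response for one output pixel:
    multiply-accumulate over the receptive field."""
    return np.sum(patch * kernel)

def adder_patch(patch, kernel):
    """AdderNet response for one output pixel: negative L1 distance
    between the patch and the kernel -- no multiplications."""
    return -np.sum(np.abs(patch - kernel))

# Illustrative 3x3 single-channel receptive field and kernel (hypothetical values).
patch = np.array([[1.0, 2.0, 0.5],
                  [0.0, 1.5, 2.5],
                  [1.0, 0.5, 1.0]])
kernel = np.array([[0.5, -1.0, 0.0],
                   [1.0,  0.5, -0.5],
                   [0.0,  1.0,  0.5]])

print("conv (multiply-accumulate):", conv_patch(patch, kernel))
print("adder (sum of abs. diff.): ", adder_patch(patch, kernel))
```

Because the adder response needs only subtraction and absolute-value units instead of multipliers, a hardware implementation can avoid DSP-heavy multiply-accumulate datapaths, which is the source of the resource savings reported above.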