{"title":"An Efficient Hardware Architecture for Activation Function in Deep Learning Processor","authors":"Lin Li, Shengbing Zhang, Juan Wu","doi":"10.1109/ICIVC.2018.8492754","DOIUrl":null,"url":null,"abstract":"In order to explore the efficient design and implementation of activation function in deep learning processor, this paper presents an efficient five-stage pipelined hardware architecture for activation function based on the piecewise linear interpolation, and a novel neuron data-LUT address mapping algorithm. Compared with the previous designs based on serial calculation, the proposed hardware architecture can achieve at least 3 times of acceleration. Four commonly used activation functions are designed based on the proposed hardware architecture, which is implemented on the XC6VLX240T of Xilinx. The LeNet-5 and AlexNet are selected as benchmarks to test the inference accuracy of different activation functions with different piecewise numbers on the MNIST and CIFAR-10 test sets in the deep learning processor prototype system. The experiment results show that the proposed hardware architecture can effectively accomplish the relevant calculation of activation functions in the deep learning processor and the accuracy loss is negligible. The proposed hardware architecture is adaptable for numerous activation functions, which can be widely used in the design of other deep learning processors.","PeriodicalId":173981,"journal":{"name":"2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIVC.2018.8492754","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
To explore the efficient design and implementation of activation functions in deep learning processors, this paper presents an efficient five-stage pipelined hardware architecture for activation functions based on piecewise linear interpolation, together with a novel neuron data-to-LUT address mapping algorithm. Compared with previous designs based on serial calculation, the proposed hardware architecture achieves a speedup of at least 3x. Four commonly used activation functions are designed on the proposed hardware architecture, which is implemented on a Xilinx XC6VLX240T. LeNet-5 and AlexNet are selected as benchmarks to test the inference accuracy of different activation functions with different numbers of piecewise segments on the MNIST and CIFAR-10 test sets in the deep learning processor prototype system. The experimental results show that the proposed hardware architecture can effectively carry out the activation function calculations in the deep learning processor, and the accuracy loss is negligible. The proposed architecture is adaptable to numerous activation functions and can be widely used in the design of other deep learning processors.
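The paper itself provides no source code, but the core idea (approximate an activation function by uniform piecewise linear segments whose slope/intercept pairs live in a LUT, with the input mapped to a LUT address) can be sketched in software. The following Python/NumPy sketch is purely illustrative: the sigmoid target, the [-8, 8] range, the 64-segment count, and all function names are assumptions, not the paper's actual address mapping algorithm or pipeline design.

```python
import numpy as np

def build_sigmoid_lut(x_min=-8.0, x_max=8.0, segments=64):
    """Precompute (slope, intercept) pairs for a uniform piecewise
    linear approximation of sigmoid over [x_min, x_max].
    (Illustrative parameters; the paper evaluates several
    activation functions and segment counts.)"""
    xs = np.linspace(x_min, x_max, segments + 1)
    ys = 1.0 / (1.0 + np.exp(-xs))
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return slopes, intercepts

def sigmoid_pwl(x, slopes, intercepts, x_min=-8.0, x_max=8.0):
    """Approximate sigmoid(x): clip the input, map it to a LUT
    address (segment index), read the segment's slope k and
    intercept b, and evaluate y = k*x + b."""
    segments = len(slopes)
    x = np.clip(np.asarray(x, dtype=float), x_min, x_max)
    step = (x_max - x_min) / segments
    # Address mapping: which segment does x fall into?
    idx = np.minimum(((x - x_min) / step).astype(int), segments - 1)
    return slopes[idx] * x + intercepts[idx]

slopes, intercepts = build_sigmoid_lut()
print(sigmoid_pwl([-3.0, 0.0, 2.5], slopes, intercepts))
# Values are close to 1/(1+exp(-x)); error shrinks as segments grow.
```

Note how the per-input steps (clip, compute LUT address, fetch slope and intercept, multiply, add) decompose into independent stages; this is the kind of decomposition that a pipelined hardware design such as the five-stage architecture described in the abstract could exploit, though the paper's actual stage boundaries are not reproduced here.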