{"title":"A Two-stage Training Mechanism for the CNN with Trainable Activation Function","authors":"K. Chen, Jing-Wen Liang","doi":"10.1109/ISOCC50952.2020.9333116","DOIUrl":null,"url":null,"abstract":"Activation function design is critical in the convolutional neural network (CNN) because it affects the learning speed and the precision of classification. In a hardware implementation, using traditional activation function may cause large hardware area overhead due to its complicated calculation such as exponential. To reduce the hardware overhead, Taylor series expansion is a popular way to approximate the traditional activation function. However, this approach brings some approximation errors, which reduce the accuracy of the involved CNN model. Therefore, the trainable activation function and a two-stage training mechanism are proposed in this paper to compensate for the accuracy loss due to the Taylor series expansion. After initializing the involved trainable activation function, the coefficients of the trainable activation function according to different neural network layer will be adjusted properly along with the neural network training process. Compared with the conventional approach, the proposed trainable activation function can involve fewer Taylor expansion terms to improve the classification accuracy by 2.24% to 53.96%. 
Therefore, CNN with trainable activation functions can achieve better classification accuracy with less area cost.","PeriodicalId":270577,"journal":{"name":"2020 International SoC Design Conference (ISOCC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International SoC Design Conference (ISOCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISOCC50952.2020.9333116","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Activation function design is critical in convolutional neural networks (CNNs) because it affects both the learning speed and the classification accuracy. In a hardware implementation, a traditional activation function may incur a large area overhead because of complicated calculations such as exponentials. To reduce this overhead, Taylor series expansion is a popular way to approximate the activation function. However, this approach introduces approximation errors, which reduce the accuracy of the CNN model. This paper therefore proposes a trainable activation function and a two-stage training mechanism to compensate for the accuracy loss caused by the Taylor series expansion. After the trainable activation function is initialized, its coefficients are adjusted per network layer along with the normal neural network training process. Compared with the conventional approach, the proposed trainable activation function uses fewer Taylor expansion terms while improving classification accuracy by 2.24% to 53.96%. Therefore, a CNN with trainable activation functions can achieve better classification accuracy at a lower area cost.
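The core idea in the abstract (initialize a polynomial activation from a Taylor expansion, then treat its coefficients as trainable parameters) can be sketched in plain Python. This is a generic illustration, not the paper's implementation: the tanh target, the six-coefficient polynomial, and the plain gradient-descent fit are all assumptions made for the example.

```python
import math

# Taylor coefficients of tanh around 0, up to x^5: x - x^3/3 + 2x^5/15
# (the "initialization" stage: start from the fixed Taylor approximation)
INIT_COEFFS = [0.0, 1.0, 0.0, -1.0 / 3.0, 0.0, 2.0 / 15.0]

def poly_act(x, coeffs):
    """Polynomial activation: sum_i c_i * x**i."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def fit_coeffs(coeffs, xs, lr=0.05, steps=300):
    """Second-stage-style fine-tune: adjust the coefficients by gradient
    descent on the mean squared error against the true tanh, sampled at xs.
    In the paper this adjustment happens jointly with network training;
    here we fit against tanh directly to keep the sketch self-contained."""
    coeffs = list(coeffs)
    n = len(xs)
    for _ in range(steps):
        grads = [0.0] * len(coeffs)
        for x in xs:
            err = poly_act(x, coeffs) - math.tanh(x)
            # d/dc_i of (f(x) - tanh(x))^2 is 2 * err * x^i
            for i in range(len(coeffs)):
                grads[i] += 2.0 * err * x**i / n
        coeffs = [c - lr * g for c, g in zip(coeffs, grads)]
    return coeffs
```

In an actual CNN the coefficients would be registered as per-layer trainable parameters and updated by backpropagation together with the weights, which is what allows each layer's activation to compensate for its own truncation error with few expansion terms.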