Monte Carlo method based precision analysis of deep convolution nets

Robert Krutsch, S. Naidu
2016 Conference on Design and Architectures for Signal and Image Processing (DASIP), October 2016, pp. 162-167
DOI: 10.1109/DASIP.2016.7853814
Convolutional Neural Networks today provide the best results for many image detection and image recognition problems. The accuracy gains of recent years have been obtained through increases in the structural complexity and the number of parameters of deep networks. Memory bandwidth and power consumption constraints limit the deployment of such state-of-the-art architectures in low-power embedded applications. Reducing the coefficient bit depth is one of the most frequently used approaches for bringing deep neural networks into low-power embedded hardware accelerators. In this paper we propose a reduced-precision, fixed-point implementation that can reduce bandwidth and power consumption significantly. The results show that with an 8-bit representation for more than 64% of the parameters, less than 0.5% of accuracy is lost. As expected, error resilience varies from layer to layer and from convolution kernel to convolution kernel. To cope with this variability and to understand which parameters need which precision, we have developed a Monte Carlo simulation tool that explores the decision space.
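The two ingredients the abstract describes, fixed-point quantization of coefficients and Monte Carlo exploration of per-layer precision, can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the authors' tool: `toy_evaluate` is a hypothetical stand-in for measuring network accuracy under a given bit-width assignment, and the Q-format parameters are examples.

```python
import random

def quantize_fixed_point(values, total_bits, frac_bits):
    """Round each value to a signed fixed-point grid with `total_bits` bits,
    `frac_bits` of them fractional; saturate at the representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))          # most negative code
    hi = (1 << (total_bits - 1)) - 1       # most positive code
    return [max(lo, min(hi, round(v * scale))) / scale for v in values]

def monte_carlo_precision_search(layers, candidate_bits, evaluate,
                                 trials=200, seed=0):
    """Sample random per-layer bit-width assignments and keep the one that
    `evaluate` (higher is better) scores highest -- a simple Monte Carlo
    exploration of the per-layer precision decision space."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        assignment = {layer: rng.choice(candidate_bits) for layer in layers}
        score = evaluate(assignment)
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

# Hypothetical objective: reward accuracy (more bits) minus a bandwidth
# penalty, mimicking the accuracy/bandwidth trade-off the paper studies.
def toy_evaluate(assignment):
    return sum(b - 0.4 * b for b in assignment.values())

best, score = monte_carlo_precision_search(
    ["conv1", "conv2", "fc"], [4, 8, 16], toy_evaluate)
```

In a real setting, `evaluate` would quantize each layer's weights with `quantize_fixed_point` at the sampled bit width and run the network on a validation set; the random search then reveals which layers tolerate 8-bit coefficients and which need more precision.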