{"title":"An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks","authors":"Charbel Sakr, Naresh R Shanbhag","doi":"10.1109/ICASSP.2018.8461702","DOIUrl":null,"url":null,"abstract":"There has been growing interest in the deployment of deep learning systems onto resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration very challenging and requiring compression mechanisms such as reduced precision. We present a layer-wise granular precision analysis which allows us to efficiently quantize pre-trained deep neural networks at minimal cost in terms of accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are most destructive and hence needing more precision than in later layers. Our approach allows for significant complexity reduction demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network up to 8 bits over a naive uniform assignment. Furthermore, we match the accuracy level of a state-of-the-art binary network while requiring up to ~ 3.5 × lower complexity. Similarly, when compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to ~ 14×) with no loss in accuracy.","PeriodicalId":6638,"journal":{"name":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"124 1","pages":"1090-1094"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP.2018.8461702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 32
Abstract
There has been growing interest in deploying deep learning systems onto resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration very challenging and requiring compression mechanisms such as reduced precision. We present a layer-wise granular precision analysis that allows us to efficiently quantize pre-trained deep neural networks at minimal cost in terms of accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are most destructive, and hence that earlier layers need more precision than later ones. Our approach allows for significant complexity reduction, as demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network by up to 8 bits compared to a naive uniform assignment. Furthermore, we match the accuracy level of a state-of-the-art binary network while requiring up to ~3.5× lower complexity. Similarly, when compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to ~14×) with no loss in accuracy.
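To illustrate the idea of per-layer precision assignment described above, the following is a minimal sketch (not the authors' implementation) of quantizing a pre-trained network's weights with a different fixed-point bit-width per layer. The helper `quantize`, the random example weights, and the particular bit-width assignment `[8, 7, 6, 4]` are illustrative assumptions only; the bit-widths merely reflect the abstract's observation that earlier layers need more precision than later ones.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize x to a signed fixed-point format with `bits` total bits
    (1 sign bit, bits-1 fractional bits), assuming values lie in [-1, 1)."""
    step = 2.0 ** (1 - bits)                         # quantization step size
    return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

# Hypothetical pre-trained weights for a 4-layer network, normalized to [-1, 1).
rng = np.random.default_rng(0)
weights = [rng.uniform(-1, 1, size=(64, 64)) for _ in range(4)]

# Per-layer precision assignment: earlier layers receive more bits than later ones.
per_layer_bits = [8, 7, 6, 4]

quantized = [quantize(w, b) for w, b in zip(weights, per_layer_bits)]
for i, (w, q, b) in enumerate(zip(weights, quantized, per_layer_bits)):
    err = np.max(np.abs(w - q))
    print(f"layer {i}: {b} bits, max quantization error = {err:.4f}")
```

In practice, the per-layer bit-widths would be chosen analytically (as in the paper) or by validating accuracy after quantization, rather than fixed by hand as in this sketch.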