Multi-level Weight Indexing Scheme for Memory-Reduced Convolutional Neural Network

Jongmin Park, Seungsik Moon, Younghoon Byun, Sunggu Lee, Youngjoo Lee

2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), March 2019. DOI: 10.1109/AICAS.2019.8771492
Targeting resource-limited intelligent mobile systems, this paper presents a multi-level weight indexing method that relaxes the memory requirements of convolutional neural networks (CNNs). In contrast to previous works, which focus only on the positions of unpruned weights, the proposed scheme also considers runs of consecutive pruned positions to generate group-level validity flags. By storing the indices of surviving weights only for the valid groups, the proposed multi-level indexing scheme reduces the amount of indexing data. In addition, we introduce indexing-aware multi-level pruning and indexing methods with variable group sizes, which further reduce the memory overhead. As a result, for the same pruning factor, the memory required to store the indexing information is reduced by up to 81%, leading to a practical CNN architecture for intelligent mobile devices.
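To make the two-level idea concrete, the sketch below illustrates one plausible reading of the abstract: weights are split into fixed-size groups, a single validity bit is kept per group, and intra-group indices of surviving weights are stored only for valid groups. The group size of 8, the bitmap-style intra-group encoding, and the cost comparison against a flat per-weight bitmap are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def multilevel_index(weights, group_size=8):
    """Sketch of a two-level index for a pruned 1-D weight array.

    Level 1: one validity bit per group (1 if the group holds any
             surviving, i.e. nonzero, weight).
    Level 2: an intra-group bitmap of surviving positions, stored
             only for valid groups.
    """
    n = len(weights)
    pad = (-n) % group_size
    w = np.pad(weights, (0, pad))              # pad to a whole number of groups
    groups = w.reshape(-1, group_size)

    group_valid = (groups != 0).any(axis=1)    # level-1 bits, one per group
    intra_maps = [(g != 0).astype(np.uint8)    # level-2 bitmaps, valid groups only
                  for g, v in zip(groups, group_valid) if v]
    values = w[w != 0]                         # surviving weight values

    # Index cost in bits: one bit per group plus group_size bits per valid group,
    # versus a flat bitmap that spends one bit per weight regardless of sparsity.
    multilevel_bits = len(group_valid) + group_size * int(group_valid.sum())
    flat_bits = n
    return values, group_valid, intra_maps, multilevel_bits, flat_bits


# Example: a heavily pruned vector in which whole groups are often empty,
# so the group-level bits let the encoder skip their intra-group maps entirely.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
w[rng.random(64) < 0.9] = 0.0                  # ~90% pruning
_, valid, _, ml_bits, flat_bits = multilevel_index(w)
print(f"valid groups: {int(valid.sum())}/{len(valid)}, "
      f"index bits: {ml_bits} (multi-level) vs {flat_bits} (flat bitmap)")
```

The saving grows with the fraction of groups that are pruned in their entirety, which is why the paper's indexing-aware pruning and variable group sizes can push the reduction further than a fixed grouping alone.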