{"title":"基于敏感层和强化学习的深度神经网络修剪方法","authors":"Wenchuan Yang, Haoran Yu, Baojiang Cui, Runqi Sui, Tianyu Gu","doi":"10.1007/s10462-023-10566-5","DOIUrl":null,"url":null,"abstract":"<div><p>It is of great significance to compress neural network models so that they can be deployed on resource-constrained embedded mobile devices. However, due to the lack of theoretical guidance for non-salient network components, existing model compression methods are inefficient and labor-intensive. In this paper, we propose a new pruning method to achieve model compression. By exploring the rank ordering of the feature maps of convolutional layers, we introduce the concept of sensitive layers and treat layers with more low-rank feature maps as sensitive layers. We propose a new algorithm for finding sensitive layers while using reinforcement learning deterministic strategies to automate pruning for insensitive layers. Experimental results show that our method achieves significant improvements over the state-of-the-art in floating-point operations and parameter reduction, with lower precision loss. For example, using ResNet-110 on CIFAR-10 achieves a 62.2% reduction in floating-point operations by removing 63.9% of parameters. 
When testing ResNet-50 on ImageNet, our method reduces floating-point operations by 53.8% by deleting 39.9% of the parameters.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"56 2","pages":"1897 - 1917"},"PeriodicalIF":10.7000,"publicationDate":"2023-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Deep neural network pruning method based on sensitive layers and reinforcement learning\",\"authors\":\"Wenchuan Yang, Haoran Yu, Baojiang Cui, Runqi Sui, Tianyu Gu\",\"doi\":\"10.1007/s10462-023-10566-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>It is of great significance to compress neural network models so that they can be deployed on resource-constrained embedded mobile devices. However, due to the lack of theoretical guidance for non-salient network components, existing model compression methods are inefficient and labor-intensive. In this paper, we propose a new pruning method to achieve model compression. By exploring the rank ordering of the feature maps of convolutional layers, we introduce the concept of sensitive layers and treat layers with more low-rank feature maps as sensitive layers. We propose a new algorithm for finding sensitive layers while using reinforcement learning deterministic strategies to automate pruning for insensitive layers. Experimental results show that our method achieves significant improvements over the state-of-the-art in floating-point operations and parameter reduction, with lower precision loss. For example, using ResNet-110 on CIFAR-10 achieves a 62.2% reduction in floating-point operations by removing 63.9% of parameters. 
When testing ResNet-50 on ImageNet, our method reduces floating-point operations by 53.8% by deleting 39.9% of the parameters.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"56 2\",\"pages\":\"1897 - 1917\"},\"PeriodicalIF\":10.7000,\"publicationDate\":\"2023-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-023-10566-5\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-023-10566-5","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Deep neural network pruning method based on sensitive layers and reinforcement learning
Compressing neural network models is of great significance for deploying them on resource-constrained embedded mobile devices. However, due to the lack of theoretical guidance on which network components are non-salient, existing model compression methods are inefficient and labor-intensive. In this paper, we propose a new pruning method to achieve model compression. By examining the ranks of the feature maps of convolutional layers, we introduce the concept of sensitive layers, treating layers with more low-rank feature maps as sensitive. We propose a new algorithm for finding sensitive layers, while using a deterministic reinforcement-learning policy to automate pruning of the insensitive layers. Experimental results show that our method achieves significant improvements over the state of the art in floating-point operation and parameter reduction, with lower accuracy loss. For example, on ResNet-110 with CIFAR-10, it achieves a 62.2% reduction in floating-point operations by removing 63.9% of the parameters. When testing ResNet-50 on ImageNet, our method reduces floating-point operations by 53.8% by deleting 39.9% of the parameters.
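The abstract's sensitivity criterion — flagging a layer as sensitive when many of its feature maps are low-rank — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the thresholds, the per-channel rank computation, and the `find_sensitive_layers` helper are assumptions introduced here for illustration.

```python
import numpy as np

def layer_sensitivity(feature_maps, rank_threshold):
    """Score a conv layer by the fraction of its feature maps that are low-rank.

    feature_maps: array of shape (channels, H, W), one H x W map per output
    channel (e.g. averaged over a batch of inputs). A layer with many
    low-rank maps is treated as "sensitive" and spared aggressive pruning.
    """
    ranks = [np.linalg.matrix_rank(fm) for fm in feature_maps]
    return float(np.mean([r <= rank_threshold for r in ranks]))

def find_sensitive_layers(layer_maps, rank_threshold=2, frac_threshold=0.5):
    """Return indices of layers whose low-rank fraction exceeds frac_threshold.
    (Hypothetical helper; threshold values are illustrative, not from the paper.)"""
    return [i for i, fm in enumerate(layer_maps)
            if layer_sensitivity(fm, rank_threshold) > frac_threshold]

# Toy example: layer 0 holds rank-1 maps (outer products); layer 1 holds
# random maps, which are full-rank with probability 1.
rng = np.random.default_rng(0)
low = np.stack([np.outer(rng.normal(size=8), rng.normal(size=8)) for _ in range(16)])
high = rng.normal(size=(16, 8, 8))
print(find_sensitive_layers([low, high]))  # [0] -- only layer 0 is flagged
```

In this sketch the remaining (insensitive) layers would then have their per-layer pruning ratios chosen by the deterministic reinforcement-learning policy the abstract mentions; that search loop is not shown here.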
About the journal:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.