A CNN Compression Method via Dynamic Channel Ranking Strategy

IF 0.8 | Q4 (Computer Science, Artificial Intelligence) | International Journal of Computational Intelligence and Applications | Pub Date: 2023-08-22 | DOI: 10.1142/s1469026823500256
Ruiming Wen, Jian Wang, Yuanlun Xie, Wenhong Tian
Citations: 0

Abstract

In recent years, the rapid development of mobile devices and embedded systems has raised demand for intelligent models that address increasingly complicated problems. However, structural complexity and large parameter counts place significant pressure on efficiency, storage space, and energy consumption. Additionally, the explosive growth of tasks built on enormous model structures and parameter sets makes it impractical to compress models manually. A standardized and effective model compression solution that yields lightweight neural networks is therefore an urgent demand of the industry. Accordingly, the Dynamic Channel Ranking Strategy (DCRS) method is proposed to compress deep convolutional neural networks. DCRS selects the channels with the highest contribution in each prunable layer, according to a compression ratio searched by a reinforcement learning agent. Compared with current model compression methods, DCRS effectively applies a variety of channel ranking strategies to the prunable layers. Experiments indicate that, at a 50% compression ratio, the compressed MobileNet achieves 70.62% Top-1 and 88.2% Top-5 accuracy on ImageNet, and the compressed ResNet achieves 92.03% accuracy on CIFAR-10. DCRS also removes more FLOPs from these networks. The compressed models achieve the best Top-1 and Top-5 accuracy on ResNet50 and the best Top-1 accuracy on MobileNetV1.
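The abstract only sketches how DCRS works; as a minimal illustration of channel-ranking-based pruning in general (not the authors' exact method), the snippet below scores a convolutional layer's output channels by the L1 norm of their filters — one common ranking strategy — and keeps the top fraction for a given compression ratio. In DCRS, the per-layer ratio would be searched by a reinforcement learning agent rather than fixed by hand, and the ranking criterion itself may vary per layer; both are assumptions here.

```python
import numpy as np

def rank_channels_l1(conv_weight):
    """Score each output channel by the L1 norm of its filter.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    Returns channel indices sorted from highest to lowest contribution.
    """
    scores = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    return np.argsort(scores)[::-1]

def prune_layer(conv_weight, compression_ratio):
    """Drop a `compression_ratio` fraction of the lowest-ranked channels."""
    n_keep = max(1, int(round(conv_weight.shape[0] * (1.0 - compression_ratio))))
    keep = np.sort(rank_channels_l1(conv_weight)[:n_keep])  # preserve channel order
    return conv_weight[keep], keep

# Toy conv layer: 8 output channels, 3 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_layer(w, compression_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3): half the channels remain
```

In a full pipeline, pruning one layer's output channels also removes the corresponding input channels of the next layer, and the network is usually fine-tuned afterward to recover accuracy.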
Source journal
CiteScore: 2.90
Self-citation rate: 0.00%
Articles published: 25
About the journal: The International Journal of Computational Intelligence and Applications (IJCIA) is a refereed journal dedicated to the theory and applications of computational intelligence (artificial neural networks, fuzzy systems, evolutionary computation, and hybrid systems). The main goal of this journal is to provide the scientific community and industry with a vehicle whereby ideas combining two or more conventional and computational-intelligence-based techniques can be discussed. The IJCIA welcomes original works in areas such as neural networks, fuzzy logic, evolutionary computation, pattern recognition, hybrid intelligent systems, symbolic machine learning, statistical models, and image/audio/video compression and retrieval.
Latest articles in this journal

- Software Effort Estimation Based on Ensemble Extreme Gradient Boosting Algorithm and Modified Jaya Optimization Algorithm
- Soybean Leaf Diseases Recognition Based on Generative Adversarial Network and Transfer Learning
- A Study of Digital Museum Collection Recommendation Algorithm Based on Improved Fuzzy Clustering Algorithm
- Efficiency in Orchid Species Classification: A Transfer Learning-Based Approach
- Research on Fault Detection for Microservices Based on Log Information and Social Network Mechanism Using BiLSTM-DCNN Model