Network Pruning Based On Architecture Search and Intermediate Representation

Dai Xuanhui, Chen Juan, Wen Quan
{"title":"基于结构搜索和中间表示的网络剪枝","authors":"Dai Xuanhui, Chen Juan, Wen Quan","doi":"10.1109/ICCWAMTIP53232.2021.9674132","DOIUrl":null,"url":null,"abstract":"Network pruning is widely used for compressing large neural networks to save computational resources. In traditional pruning methods, predefined hyperparameters are often required to determine the network structure of the target small network. However, too many hyperparameters are often undesirable. Therefore, we use the transformable architecture search (TAS) method to dynamically search the network structure of each layer when pruning the network width. In the TAS method, the channels number of the pruned network in each layer is represented by a learnable probability distribution. By minimizing computation cost, the probability distribution can be calculated and used to get the width configuration of the target pruned network. Then, the depth of the network was compressed based on the model get in the previous step. The method for compressing depth is block-wise intermediate representation training. This method is based on the hint training, where the network depth is compressed by comparing the intermediate representation of each layer of two equally wide teacher and student models. In the experiments, about 0.4% improvement over the existing method was viewed for the ResNet network on both CIFAR10 and CIFAR100 datasets.","PeriodicalId":358772,"journal":{"name":"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Network Pruning Based On Architecture Search and Intermediate Representation\",\"authors\":\"Dai Xuanhui, Chen Juan, Wen Quan\",\"doi\":\"10.1109/ICCWAMTIP53232.2021.9674132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Network pruning is widely used for compressing large neural networks to save computational resources. In traditional pruning methods, predefined hyperparameters are often required to determine the network structure of the target small network. However, too many hyperparameters are often undesirable. Therefore, we use the transformable architecture search (TAS) method to dynamically search the network structure of each layer when pruning the network width. In the TAS method, the channels number of the pruned network in each layer is represented by a learnable probability distribution. By minimizing computation cost, the probability distribution can be calculated and used to get the width configuration of the target pruned network. Then, the depth of the network was compressed based on the model get in the previous step. The method for compressing depth is block-wise intermediate representation training. This method is based on the hint training, where the network depth is compressed by comparing the intermediate representation of each layer of two equally wide teacher and student models. 
In the experiments, about 0.4% improvement over the existing method was viewed for the ResNet network on both CIFAR10 and CIFAR100 datasets.\",\"PeriodicalId\":358772,\"journal\":{\"name\":\"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCWAMTIP53232.2021.9674132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCWAMTIP53232.2021.9674132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Network pruning is widely used to compress large neural networks and save computational resources. Traditional pruning methods often require predefined hyperparameters to determine the structure of the target small network, but too many hyperparameters are undesirable. We therefore use the transformable architecture search (TAS) method to dynamically search the structure of each layer when pruning the network width. In TAS, the number of channels in each layer of the pruned network is represented by a learnable probability distribution. By minimizing the computation cost, this distribution is learned and then used to derive the width configuration of the target pruned network. Next, the depth of the network is compressed based on the model obtained in the previous step, using block-wise intermediate representation training. This method builds on hint training: the network depth is compressed by comparing the intermediate representations of each layer of two equally wide teacher and student models. In experiments, an improvement of about 0.4% over the existing method was observed for ResNet on both the CIFAR10 and CIFAR100 datasets.
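To make the width-search step concrete, the following is a minimal PyTorch sketch of a TAS-style searchable convolution. The names WidthSearchConv, candidate_widths, and tau are illustrative assumptions, not identifiers from the paper, and the channel-wise interpolation that TAS uses to align feature maps of different widths is simplified here to zero-padding of the trailing channels.

```python
# Minimal sketch of TAS-style width search for a single convolutional layer.
# WidthSearchConv, candidate_widths and tau are illustrative names (assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class WidthSearchConv(nn.Module):
    """Convolution whose output width is governed by a learnable distribution."""

    def __init__(self, in_channels, candidate_widths, kernel_size=3):
        super().__init__()
        self.candidate_widths = candidate_widths              # e.g. [8, 16, 32]
        self.conv = nn.Conv2d(in_channels, max(candidate_widths),
                              kernel_size, padding=kernel_size // 2)
        # One learnable logit per candidate channel count: this is the
        # "learnable probability distribution" over widths.
        self.width_logits = nn.Parameter(torch.zeros(len(candidate_widths)))

    def forward(self, x, tau=1.0):
        full = self.conv(x)                                   # run at maximum width
        probs = F.gumbel_softmax(self.width_logits, tau=tau)  # differentiable sampling
        # Soft mixture over widths: keep the first c channels of the full output,
        # zero the rest, and weight each variant by its sampled probability.
        # (TAS proper aligns widths with channel-wise interpolation; zero-padding
        # is a simplification for brevity.)
        out = torch.zeros_like(full)
        for p, c in zip(probs, self.candidate_widths):
            mask = torch.zeros(full.shape[1], device=full.device)
            mask[:c] = 1.0
            out = out + p * full * mask.view(1, -1, 1, 1)
        # Expected number of channels: a differentiable proxy for this layer's
        # computation cost, to be accumulated over layers during the search.
        widths = torch.tensor(self.candidate_widths,
                              dtype=probs.dtype, device=probs.device)
        expected_width = (probs * widths).sum()
        return out, expected_width
```

During search, the expected widths returned by the layers can be combined into a differentiable proxy for the total computation cost and added to the classification loss (for example, loss = ce_loss + cost_weight * cost_proxy); after convergence, each layer keeps the candidate width with the highest probability.

For the depth-compression step, below is a minimal sketch of block-wise hint training, under the assumption that the teacher and student are already split into the same number of equally wide blocks, so their intermediate representations can be compared directly with an MSE loss; the block structure and training schedule are illustrative, not the paper's exact procedure.

```python
# Minimal sketch of block-wise hint (intermediate-representation) training,
# assuming teacher and student have the same number of equally wide blocks.
import torch
import torch.nn.functional as F


def blockwise_hint_loss(teacher_blocks, student_blocks, x):
    """Sum of per-block MSE losses between teacher and student representations."""
    loss = x.new_zeros(())
    feat = x
    for t_block, s_block in zip(teacher_blocks, student_blocks):
        with torch.no_grad():
            t_out = t_block(feat)            # teacher's intermediate representation
        s_out = s_block(feat)                # student block receives the same input
        loss = loss + F.mse_loss(s_out, t_out)
        feat = t_out                         # next block consumes the teacher's output
    return loss
```

Feeding each student block with the teacher's representation from the previous block keeps the per-block targets well defined even before the shallower student has converged; the per-block losses can then be summed and optimized jointly, or the blocks can be trained one at a time.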