{"title":"cnn中模型压缩和知识转移的研究进展","authors":"Haoqian Xue, Keyu Ren","doi":"10.1109/CSAIEE54046.2021.9543192","DOIUrl":null,"url":null,"abstract":"Convolutional neural network (CNN) is the main tool for deep learning and computer vision, and it has many applications in face recognition, sign language recognition and speech recognition. As deep learning becomes more and more mature, the application of convolutional neural networks will become more and more widespread. As we know, the deeper a neural network is, the higher its memory and computational power overhead. Many neural networks used in medicine, autonomous driving, and language recognition have large model complexity, which makes it difficult to apply these CNNs to people's daily life. Therefore, the development of simple, lightweight and small neural networks has become the focus of researchers nowadays. In this paper, we summarize the development of convolutional neural networks in recent years, as well as various methods for compressing models and migrating data from large models to small ones. In general, the main convolutional neural network compression approaches are: pruning, knowledge distillation, aggregating neurons of different scales, proposing new structures, etc. We start from the structure of neural networks, introduce the major structural changes experienced from the development of convolutional neural networks, and then analyze various pruning, compression and knowledge distillation methods. For specific methods, we run different models and compare the improvements of the new methods with respect to the old ones. We also debugged models on adversarial generative pruning, teacher-student networks, and other compressed CNNs during this period, and drew some constructive conclusions. Finally, we summarize the trends in CNN development in recent years and the challenges we may face in the future.","PeriodicalId":376014,"journal":{"name":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Recent research trends on Model Compression and Knowledge Transfer in CNNs\",\"authors\":\"Haoqian Xue, Keyu Ren\",\"doi\":\"10.1109/CSAIEE54046.2021.9543192\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional neural network (CNN) is the main tool for deep learning and computer vision, and it has many applications in face recognition, sign language recognition and speech recognition. As deep learning becomes more and more mature, the application of convolutional neural networks will become more and more widespread. As we know, the deeper a neural network is, the higher its memory and computational power overhead. Many neural networks used in medicine, autonomous driving, and language recognition have large model complexity, which makes it difficult to apply these CNNs to people's daily life. Therefore, the development of simple, lightweight and small neural networks has become the focus of researchers nowadays. In this paper, we summarize the development of convolutional neural networks in recent years, as well as various methods for compressing models and migrating data from large models to small ones. 
In general, the main convolutional neural network compression approaches are: pruning, knowledge distillation, aggregating neurons of different scales, proposing new structures, etc. We start from the structure of neural networks, introduce the major structural changes experienced from the development of convolutional neural networks, and then analyze various pruning, compression and knowledge distillation methods. For specific methods, we run different models and compare the improvements of the new methods with respect to the old ones. We also debugged models on adversarial generative pruning, teacher-student networks, and other compressed CNNs during this period, and drew some constructive conclusions. Finally, we summarize the trends in CNN development in recent years and the challenges we may face in the future.\",\"PeriodicalId\":376014,\"journal\":{\"name\":\"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)\",\"volume\":\"67 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSAIEE54046.2021.9543192\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSAIEE54046.2021.9543192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Convolutional neural networks (CNNs) are a principal tool in deep learning and computer vision, with applications in face recognition, sign language recognition, and speech recognition. As deep learning matures, CNNs will be deployed ever more widely. The deeper a network is, however, the greater its memory footprint and computational cost. Many networks used in medicine, autonomous driving, and language recognition are so complex that deploying them in everyday settings is difficult, so the development of simple, lightweight, and small neural networks has become a focus for researchers.

In this paper, we summarize the development of convolutional neural networks in recent years, along with various methods for compressing models and transferring knowledge from large models to small ones. The main CNN compression approaches are pruning, knowledge distillation, aggregation of neurons at different scales, and the design of new structures. We start from the structure of neural networks, introduce the major structural changes that CNNs have undergone over the course of their development, and then analyze a range of pruning, compression, and knowledge distillation methods. For specific methods, we run different models and compare the improvements of the new methods over the old ones. We also experimented with models based on generative adversarial pruning, teacher-student networks, and other compressed CNNs, and drew some constructive conclusions. Finally, we summarize recent trends in CNN development and the challenges that may lie ahead.
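The abstract names knowledge distillation and pruning as the two main compression families the paper surveys. As a concrete illustration only (the sketch below is not taken from the paper; the temperature and mixing weight are assumed values), a minimal Hinton-style distillation loss in PyTorch blends the teacher's softened targets with the usual hard-label cross-entropy:

```python
# Minimal knowledge-distillation loss sketch (Hinton-style soft targets);
# temperature and alpha are illustrative assumptions, not from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL term pulling the student toward the teacher's soft targets;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard hard-label loss on the ground-truth classes.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Pruning, the other family highlighted above, can be sketched just as briefly. Global magnitude pruning is one of the simplest criteria in the survey's scope; the function below (again an illustrative sketch, with an assumed sparsity level) zeroes the smallest-magnitude weights across all convolutional layers:

```python
# Minimal global magnitude-pruning sketch; the sparsity fraction is an
# assumed example value, and real pipelines typically fine-tune afterwards.
import torch
import torch.nn as nn

def global_magnitude_prune(model: nn.Module, sparsity: float = 0.5):
    """Zero the smallest-magnitude weights across all Conv2d layers."""
    convs = [m.weight.data for m in model.modules()
             if isinstance(m, nn.Conv2d)]
    if not convs:
        return
    all_w = torch.cat([w.abs().flatten() for w in convs])
    k = int(sparsity * all_w.numel())
    if k == 0:
        return
    # The k-th smallest absolute weight becomes the pruning threshold.
    threshold = all_w.kthvalue(k).values
    for w in convs:
        w.mul_((w.abs() > threshold).float())
```

In practice, magnitude pruning is usually followed by fine-tuning to recover accuracy, and structured variants remove whole filters rather than individual weights so that the saved computation is realized on standard hardware.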