Channel-Wise Attention and Channel Combination for Knowledge Distillation

C. Han, K. Lee
{"title":"知识升华的渠道关注与渠道组合","authors":"C. Han, K. Lee","doi":"10.1145/3400286.3418273","DOIUrl":null,"url":null,"abstract":"Knowledge distillation is a strategy to build machine learning models efficiently by making use of knowledge embedded in a pretrained model. Teacher-student framework is a well-known one to use knowledge distillation, where a teacher network usually contains knowledge for a specific task and a student network is constructed in a simpler architecture inheriting the knowledge of the teacher network. This paper proposes a new approach that uses an attention mechanism to extract knowledge from a teacher network. The attention function plays the role of determining which channels of feature maps in the teacher network to be used for training the student network so that the student network can only learn useful features. This approach allows a new model to learn useful features considering the model complexity.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"485 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Channel-Wise Attention and Channel Combination for Knowledge Distillation\",\"authors\":\"C. Han, K. Lee\",\"doi\":\"10.1145/3400286.3418273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Knowledge distillation is a strategy to build machine learning models efficiently by making use of knowledge embedded in a pretrained model. Teacher-student framework is a well-known one to use knowledge distillation, where a teacher network usually contains knowledge for a specific task and a student network is constructed in a simpler architecture inheriting the knowledge of the teacher network. This paper proposes a new approach that uses an attention mechanism to extract knowledge from a teacher network. The attention function plays the role of determining which channels of feature maps in the teacher network to be used for training the student network so that the student network can only learn useful features. 
This approach allows a new model to learn useful features considering the model complexity.\",\"PeriodicalId\":326100,\"journal\":{\"name\":\"Proceedings of the International Conference on Research in Adaptive and Convergent Systems\",\"volume\":\"485 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the International Conference on Research in Adaptive and Convergent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3400286.3418273\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3400286.3418273","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Knowledge distillation is a strategy for building machine learning models efficiently by exploiting the knowledge embedded in a pretrained model. The teacher-student framework is a well-known way to apply knowledge distillation: a teacher network contains the knowledge for a specific task, and a student network with a simpler architecture is constructed to inherit that knowledge. This paper proposes a new approach that uses an attention mechanism to extract knowledge from the teacher network. The attention function determines which channels of the teacher's feature maps are used for training the student network, so that the student learns only useful features. This approach allows a new model to learn useful features while taking model complexity into account.
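The abstract does not spell out how the channel-wise attention is realized. As a rough illustration only, the PyTorch sketch below shows one plausible way an attention function over teacher feature-map channels could gate a feature-distillation loss. The ChannelAttention module, its squeeze-and-excitation-style gating, and the assumption that the teacher and student feature maps share the same shape are all assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch: channel-wise attention gating a feature-distillation loss.
# Illustrative assumptions, not the authors' exact method: the attention is
# computed from the teacher's features via global average pooling plus a
# small MLP, and the student's feature map is assumed to match in shape.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Scores each channel of a teacher feature map with a weight in (0, 1),
    so only highly weighted channels drive the distillation loss."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weight in (0, 1)
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, C, H, W) -> channel descriptor via global average pooling
        desc = feat.mean(dim=(2, 3))  # (batch, C)
        return self.fc(desc)          # (batch, C) channel weights


def distillation_loss(student_feat, teacher_feat, attention):
    """Per-channel feature-matching loss, weighted by the teacher's channel
    attention so the student focuses on the channels deemed useful."""
    w = attention(teacher_feat)  # (B, C)
    per_channel = F.mse_loss(
        student_feat, teacher_feat, reduction="none"
    ).mean(dim=(2, 3))           # (B, C) mean squared error per channel
    return (w * per_channel).mean()


if __name__ == "__main__":
    attn = ChannelAttention(channels=256)
    teacher = torch.randn(8, 256, 14, 14)  # stand-in teacher feature map
    student = torch.randn(8, 256, 14, 14)  # student feature map, same shape assumed
    print(distillation_loss(student, teacher, attn))
```

Gating the per-channel loss this way means channels the attention scores near zero contribute almost nothing to training, which matches the abstract's stated goal of letting the student network learn only useful features.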