A survey on knowledge distillation: Recent advancements

Amir Moslemi, Anna Briskina, Zubeka Dang, Jason Li
{"title":"A survey on knowledge distillation: Recent advancements","authors":"Amir Moslemi ,&nbsp;Anna Briskina ,&nbsp;Zubeka Dang ,&nbsp;Jason Li","doi":"10.1016/j.mlwa.2024.100605","DOIUrl":null,"url":null,"abstract":"<div><div>Deep learning has achieved notable success across academia, medicine, and industry. Its ability to identify complex patterns in large-scale data and to manage millions of parameters has made it highly advantageous. However, deploying deep learning models presents a significant challenge due to their high computational demands. Knowledge distillation (KD) has emerged as a key technique for model compression and efficient knowledge transfer, enabling the deployment of deep learning models on resource-limited devices without compromising performance. This survey examines recent advancements in KD, highlighting key innovations in architectures, training paradigms, and application domains. We categorize contemporary KD methods into traditional approaches, such as response-based, feature-based, and relation-based knowledge distillation, and novel advanced paradigms, including self-distillation, cross-modal distillation, and adversarial distillation strategies. Additionally, we discuss emerging challenges, particularly in the context of distillation under limited data scenarios, privacy-preserving KD, and the interplay with other model compression techniques like quantization. Our survey also explores applications across computer vision, natural language processing, and multimodal tasks, where KD has driven performance improvements and enhanced model compression. This review aims to provide researchers and practitioners with a comprehensive understanding of the state-of-the-art in knowledge distillation, bridging foundational concepts with the latest methodologies and practical implications.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"18 ","pages":"Article 100605"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning with applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666827024000811","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning has achieved notable success across academia, medicine, and industry. Its ability to identify complex patterns in large-scale data and to manage millions of parameters has made it highly advantageous. However, deploying deep learning models presents a significant challenge due to their high computational demands. Knowledge distillation (KD) has emerged as a key technique for model compression and efficient knowledge transfer, enabling the deployment of deep learning models on resource-limited devices without compromising performance. This survey examines recent advancements in KD, highlighting key innovations in architectures, training paradigms, and application domains. We categorize contemporary KD methods into traditional approaches, such as response-based, feature-based, and relation-based knowledge distillation, and novel advanced paradigms, including self-distillation, cross-modal distillation, and adversarial distillation strategies. Additionally, we discuss emerging challenges, particularly in the context of distillation under limited data scenarios, privacy-preserving KD, and the interplay with other model compression techniques like quantization. Our survey also explores applications across computer vision, natural language processing, and multimodal tasks, where KD has driven performance improvements and enhanced model compression. This review aims to provide researchers and practitioners with a comprehensive understanding of the state-of-the-art in knowledge distillation, bridging foundational concepts with the latest methodologies and practical implications.
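As a concrete anchor for the "response-based" category named in the abstract, the sketch below illustrates the classic soft-target distillation loss in the style of Hinton et al. (2015): the student is trained on a weighted combination of the hard-label cross-entropy and the KL divergence between temperature-softened teacher and student outputs. This is an illustrative example, not code from the survey; the temperature and alpha hyperparameters, the toy teacher/student networks, and the random data are assumptions made for demonstration.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    # Response-based KD: match the teacher's softened output distribution
    # while still fitting the ground-truth labels.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # KL term scaled by T^2 so its gradients stay comparable to the
    # hard-label term (Hinton et al., 2015).
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, targets)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Minimal usage: distill a small student from a frozen teacher on dummy data.
teacher = torch.nn.Sequential(torch.nn.Linear(32, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 10)).eval()
student = torch.nn.Linear(32, 10)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 32)                  # dummy inputs
y = torch.randint(0, 10, (64,))          # dummy labels

with torch.no_grad():                    # teacher is frozen
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Scaling the KL term by the squared temperature keeps its gradients comparable in magnitude to the cross-entropy term as the temperature grows; feature-based and relation-based variants surveyed in the paper replace or supplement this output-matching term with losses on intermediate representations or on pairwise relations between samples.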