Why logit distillation works: A novel knowledge distillation technique by deriving target augmentation and logits distortion

IF 6.9 · CAS Tier 1 (Management) · JCR Q1 (Computer Science, Information Systems) · Information Processing & Management · Pub Date: 2025-05-01 · Epub Date: 2025-01-13 · DOI: 10.1016/j.ipm.2024.104056
Md Imtiaz Hossain, Sharmen Akhter, Nosin Ibna Mahbub, Choong Seon Hong, Eui-Nam Huh
{"title":"Why logit distillation works: A novel knowledge distillation technique by deriving target augmentation and logits distortion","authors":"Md Imtiaz Hossain,&nbsp;Sharmen Akhter,&nbsp;Nosin Ibna Mahbub,&nbsp;Choong Seon Hong,&nbsp;Eui-Nam Huh","doi":"10.1016/j.ipm.2024.104056","DOIUrl":null,"url":null,"abstract":"<div><div>Although logit distillation aims to transfer knowledge from a large teacher network to a student, the underlying mechanisms and reasons for its effectiveness are unclear. This article explains the effectiveness of knowledge distillation (KD). Based on the observations, this paper proposes a novel distillation technique called TALD-KD that performs through Target Augmentation and a novel concept of dynamic Logits Distortion technique. The proposed TALD-KD unraveled the intricate relationships of dark knowledge semantics, randomness, flexibility, and augmentation with logits-level KD via three different investigations, hypotheses, and observations. TALD-KD improved student generalization through the linear combination of the teacher logits and random noise. Among the three versions assessed (TALD-A, TALD-B, and TALD-C), TALD-B improved the performance of KD on a large-scale ImageNet-1K dataset from 68.87% to 69.58% for top-1 accuracy, and from 88.76% to 90.13% for top-5 accuracy. Similarly, for the state-of-the-art approach, DKD, the performance improvements by the TALD-B ranged from 72.05% to 72.81% for top-1 accuracy and from 91.05% to 92.04% for top-5 accuracy. The other versions revealed the secrets of logit-level KD. Extensive ablation studies confirmed the superiority of the proposed approach over existing state-of-the-art approaches in diverse scenarios.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 3","pages":"Article 104056"},"PeriodicalIF":6.9000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457324004151","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/13 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Although logit distillation aims to transfer knowledge from a large teacher network to a smaller student, the underlying mechanisms and the reasons for its effectiveness remain unclear. This article explains why knowledge distillation (KD) works. Based on these observations, the paper proposes a novel distillation technique, TALD-KD, built on Target Augmentation and a novel dynamic Logits Distortion technique. TALD-KD unravels the intricate relationships among dark-knowledge semantics, randomness, flexibility, and augmentation in logit-level KD via three separate investigations, hypotheses, and observations, and it improves student generalization through a linear combination of the teacher logits and random noise. Among the three versions assessed (TALD-A, TALD-B, and TALD-C), TALD-B improved the performance of KD on the large-scale ImageNet-1K dataset from 68.87% to 69.58% top-1 accuracy and from 88.76% to 90.13% top-5 accuracy. Similarly, for the state-of-the-art DKD approach, TALD-B raised top-1 accuracy from 72.05% to 72.81% and top-5 accuracy from 91.05% to 92.04%. The other versions shed light on the inner workings of logit-level KD, and extensive ablation studies confirm the superiority of the proposed approach over existing state-of-the-art methods in diverse scenarios.
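The abstract's central mechanism is distorting the teacher's logits with random noise before computing the distillation loss. The following is a minimal PyTorch sketch of that idea under stated assumptions: the mixing weight gamma, temperature tau, and loss weight alpha are illustrative placeholders, and the noise model (standard Gaussian) and loss form (Hinton-style softened KL plus cross-entropy) are this sketch's assumptions, not the authors' exact TALD-KD formulation.

# Hypothetical sketch of KD with teacher-logit distortion: the teacher's
# logits are linearly combined with random noise before being softened.
# gamma, tau, and alpha are illustrative hyperparameters, not the paper's.
import torch
import torch.nn.functional as F

def distorted_kd_loss(student_logits, teacher_logits, targets,
                      tau=4.0, alpha=0.5, gamma=0.1):
    # Linear combination of teacher logits and Gaussian noise
    # (an assumed form of the paper's "logits distortion").
    noise = torch.randn_like(teacher_logits)
    distorted = (1.0 - gamma) * teacher_logits + gamma * noise

    # Softened-distribution KL term, scaled by tau^2 as in standard KD.
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(distorted / tau, dim=1),
        reduction="batchmean",
    ) * tau * tau

    # Hard-label cross-entropy on the ground-truth targets.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: a batch of 8 samples over 100 classes.
if __name__ == "__main__":
    s = torch.randn(8, 100, requires_grad=True)
    t = torch.randn(8, 100)
    y = torch.randint(0, 100, (8,))
    loss = distorted_kd_loss(s, t, y)
    loss.backward()
    print(f"loss = {loss.item():.4f}")

Intuitively, the noise term randomly reshapes the teacher's non-target probabilities from batch to batch, which is one plausible reading of how the distortion described in the abstract could regularize the student.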

Source journal
Information Processing & Management (Engineering & Technology · Computer Science: Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles published: 276
Review time: 39 days
Journal description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing. We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.
Latest articles from this journal
PhiMark: watermarking relational data robustly with zero distortion
A self-guided few-shot semantic segmentation model based on query foreground-background similarity
Emotion and noise-robust speaker identification via filter-free self-supervised learning
TemFRC: Enterprise financial risk prediction with temporal folding and risk contrast
A dual-source knowledge distillation framework for hate speech detection based on cognitive distortion awareness