Distilling vision-language pre-training models with modality-specific meta-learning

IF 7.6 | CAS Tier 1 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence) | Knowledge-Based Systems | Pub Date: 2025-04-22 | Epub Date: 2025-03-12 | DOI: 10.1016/j.knosys.2025.113300
Xinge Ma, Jin Wang, Xuejie Zhang
{"title":"Distilling vision-language pre-training models with modality-specific meta-learning","authors":"Xinge Ma,&nbsp;Jin Wang,&nbsp;Xuejie Zhang","doi":"10.1016/j.knosys.2025.113300","DOIUrl":null,"url":null,"abstract":"<div><div>Vision-language pre-training (VLP) models have exhibited excellent performance on diverse vision-language tasks, while the ensuing large-scale model parameters greatly limit their application. Knowledge distillation (KD) makes it possible to apply a VLP model to scenarios with limited resources and real-time responses by transferring the knowledge from it (<em>i.e.</em>, teacher) into a lightweight one (<em>i.e.</em>, student). However, existing KD methods are primarily designed for unimodal models and thus fail to realize their full potential when migrating them to distill VLP models considering the presence of multiple modalities. Moreover, these KD strategies only unilaterally force the student model to approach the output feature maps generated by the teacher model while ignoring the deeper correlations between them, which may hinder effective knowledge transfer. To tackle these issues, we propose MMKD, a multimodal <strong>K</strong>nowledge <strong>D</strong>istillation method with <strong>M</strong>odality-specific <strong>M</strong>eta-learning, in which the training objective of the teacher model is converted to optimize the teaching ability for knowledge transfer through feedback from the student model. Meanwhile, to disentangle mutual interference between different modalities when applying KD, the modality-specific distillation objective is designed to encourage the student model to learn the teacher’s knowledge from different modalities. By progressively optimizing the teacher model towards the direction of maximizing the student’s performance, more appropriate soft labels are generated to help the student model learn across different modalities, leading to improved performance. Experiments on three types of VLP models across different downstream tasks demonstrate that the superiority of the proposed method in compressing VLP models.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"315 ","pages":"Article 113300"},"PeriodicalIF":7.6000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125003478","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/12 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Vision-language pre-training (VLP) models have exhibited excellent performance on diverse vision-language tasks, but their large number of parameters greatly limits their application. Knowledge distillation (KD) makes it possible to apply a VLP model in scenarios with limited resources and real-time response requirements by transferring the knowledge from it (i.e., the teacher) into a lightweight one (i.e., the student). However, existing KD methods are primarily designed for unimodal models and thus fail to realize their full potential when migrated to distill VLP models, where multiple modalities are present. Moreover, these KD strategies only unilaterally force the student model to approach the output feature maps generated by the teacher model, while ignoring the deeper correlations between the two models, which may hinder effective knowledge transfer. To tackle these issues, we propose MMKD, a multimodal Knowledge Distillation method with Modality-specific Meta-learning, in which the training objective of the teacher model is converted to optimizing its teaching ability for knowledge transfer, guided by feedback from the student model. Meanwhile, to disentangle the mutual interference between different modalities during distillation, a modality-specific distillation objective is designed to encourage the student model to learn the teacher's knowledge from each modality separately. By progressively optimizing the teacher model in the direction that maximizes the student's performance, more appropriate soft labels are generated to help the student model learn across different modalities, leading to improved performance. Experiments on three types of VLP models across different downstream tasks demonstrate the superiority of the proposed method in compressing VLP models.
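To make the two ideas in the abstract concrete, below is a minimal PyTorch-style sketch of (a) a modality-specific distillation loss that keeps the vision and text streams from interfering with each other, and (b) a meta-learning step in which the teacher is updated through feedback from a virtually updated student. This is an illustration grounded only in the abstract: the class, function, and hyperparameter names (TinyVLModel, meta_step, student_lr, the "vision"/"text" output keys) are assumptions and do not reproduce the paper's actual architecture or objectives.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class TinyVLModel(nn.Module):
    # Toy stand-in for a VLP model with separate vision and text heads (assumed interface).
    def __init__(self, dim=32, num_classes=10):
        super().__init__()
        self.vision_head = nn.Linear(dim, num_classes)
        self.text_head = nn.Linear(dim, num_classes)

    def forward(self, vision_x, text_x):
        return {"vision": self.vision_head(vision_x),
                "text": self.text_head(text_x)}


def modality_specific_kd_loss(teacher_out, student_out, temperature=4.0):
    # Distill each modality with its own KL term so gradients from the
    # vision and text streams do not interfere with one another.
    loss = 0.0
    for m in ("vision", "text"):
        t = teacher_out[m] / temperature  # soft labels, kept in the autograd graph
        s = student_out[m] / temperature  # so the teacher can later receive feedback
        loss = loss + F.kl_div(F.log_softmax(s, dim=-1),
                               F.softmax(t, dim=-1),
                               reduction="batchmean") * temperature ** 2
    return loss


def meta_step(teacher, student, train_batch, held_out_batch, held_out_labels,
              teacher_opt, student_lr=1e-3):
    # One meta-learning step: update the teacher so that, after the student takes
    # a virtual distillation step on the teacher's soft labels, the student's loss
    # on held-out data decreases (student feedback drives the teacher).
    v, t = train_batch
    inner_loss = modality_specific_kd_loss(teacher(v, t), student(v, t))

    # Virtual student update, kept differentiable with respect to the teacher.
    names, params = zip(*student.named_parameters())
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    updated = {n: p - student_lr * g for n, p, g in zip(names, params, grads)}

    # Evaluate the virtually updated student; the loss backpropagates through
    # the inner step into the teacher, which is then updated.
    v_h, t_h = held_out_batch
    out = functional_call(student, updated, (v_h, t_h))
    meta_loss = (F.cross_entropy(out["vision"], held_out_labels)
                 + F.cross_entropy(out["text"], held_out_labels))
    teacher_opt.zero_grad()
    meta_loss.backward()
    teacher_opt.step()
    return meta_loss.item()


# Usage with random tensors, just to show the shapes involved.
teacher, student = TinyVLModel(), TinyVLModel()
teacher_opt = torch.optim.Adam(teacher.parameters(), lr=1e-4)
v = torch.randn(8, 32)
t = torch.randn(8, 32)
labels = torch.randint(0, 10, (8,))
meta_step(teacher, student, (v, t), (v, t), labels, teacher_opt)

In a full training loop the student would also take real optimizer steps on the distillation loss between teacher updates; the sketch only shows the bilevel (inner/outer) structure that lets the teacher be optimized toward maximizing the student's performance.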
Source journal
Knowledge-Based Systems (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Annual articles: 1245
Review time: 7.8 months
Journal overview: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.
Latest articles from this journal
Efficient intrusion detection in internet of vehicles through optimized node-level capsule graph neural networks for advanced security
CG-CGSL: Clustering and graph topological properties co-guided graph structure learning
Enhancing fairness and privacy in federated graph neural networks via macro-level restructuring
HAQ-ViT: A hardware-aware post-training quantization for efficient vision transformer inference
Polarization information restoration for visual reflection removal via cross dual-stream network