Shared Growth of Graph Neural Networks via Prompted Free-Direction Knowledge Distillation

Kaituo Feng;Yikun Miao;Changsheng Li;Ye Yuan;Guoren Wang
{"title":"基于提示自由方向知识蒸馏的图神经网络共享增长","authors":"Kaituo Feng;Yikun Miao;Changsheng Li;Ye Yuan;Guoren Wang","doi":"10.1109/TPAMI.2025.3543211","DOIUrl":null,"url":null,"abstract":"Knowledge distillation (KD) has shown to be effective to boost the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, it is often quite challenging to train a satisfactory deeper GNN due to the well-known over-parametrized and over-smoothing issues, leading to invalid knowledge transfer in practical applications. In this paper, we propose the first <bold>Free</b>-direction <bold>K</b>nowledge <bold>D</b>istillation framework via reinforcement learning for GNNs, called <bold>FreeKD</b>, which is no longer required to provide a deeper well-optimized teacher GNN. Our core idea is to collaboratively learn two shallower GNNs in an effort to exchange knowledge between them via reinforcement learning in a hierarchical way. As we observe that one typical GNN model often exhibits better and worse performances at different nodes during training, we devise a dynamic and free-direction knowledge transfer strategy that involves two levels of actions: 1) node-level action determines the directions of knowledge transfer between the corresponding nodes of two networks; and then 2) structure-level action determines which of the local structures generated by the node-level actions to be propagated. Additionally, considering that different augmented graphs can potentially capture distinct perspectives or representations of the graph data, we propose FreeKD-Prompt that learns undistorted and diverse augmentations based on prompt learning for exchanging varied knowledge. Furthermore, instead of confining knowledge exchange within two GNNs, we develop FreeKD++ and FreeKD-Prompt++ to enable free-direction knowledge transfer among multiple shallow GNNs. Extensive experiments on five benchmark datasets demonstrate our approaches outperform the base GNNs by a large margin, and show their efficacy to various GNNs. More surprisingly, our FreeKD has comparable or even better performance than traditional KD algorithms that distill knowledge from a deeper and stronger teacher GNN.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 6","pages":"4377-4394"},"PeriodicalIF":18.6000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Shared Growth of Graph Neural Networks via Prompted Free-Direction Knowledge Distillation\",\"authors\":\"Kaituo Feng;Yikun Miao;Changsheng Li;Ye Yuan;Guoren Wang\",\"doi\":\"10.1109/TPAMI.2025.3543211\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Knowledge distillation (KD) has shown to be effective to boost the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, it is often quite challenging to train a satisfactory deeper GNN due to the well-known over-parametrized and over-smoothing issues, leading to invalid knowledge transfer in practical applications. In this paper, we propose the first <bold>Free</b>-direction <bold>K</b>nowledge <bold>D</b>istillation framework via reinforcement learning for GNNs, called <bold>FreeKD</b>, which is no longer required to provide a deeper well-optimized teacher GNN. 
Our core idea is to collaboratively learn two shallower GNNs in an effort to exchange knowledge between them via reinforcement learning in a hierarchical way. As we observe that one typical GNN model often exhibits better and worse performances at different nodes during training, we devise a dynamic and free-direction knowledge transfer strategy that involves two levels of actions: 1) node-level action determines the directions of knowledge transfer between the corresponding nodes of two networks; and then 2) structure-level action determines which of the local structures generated by the node-level actions to be propagated. Additionally, considering that different augmented graphs can potentially capture distinct perspectives or representations of the graph data, we propose FreeKD-Prompt that learns undistorted and diverse augmentations based on prompt learning for exchanging varied knowledge. Furthermore, instead of confining knowledge exchange within two GNNs, we develop FreeKD++ and FreeKD-Prompt++ to enable free-direction knowledge transfer among multiple shallow GNNs. Extensive experiments on five benchmark datasets demonstrate our approaches outperform the base GNNs by a large margin, and show their efficacy to various GNNs. More surprisingly, our FreeKD has comparable or even better performance than traditional KD algorithms that distill knowledge from a deeper and stronger teacher GNN.\",\"PeriodicalId\":94034,\"journal\":{\"name\":\"IEEE transactions on pattern analysis and machine intelligence\",\"volume\":\"47 6\",\"pages\":\"4377-4394\"},\"PeriodicalIF\":18.6000,\"publicationDate\":\"2025-02-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on pattern analysis and machine intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10891755/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10891755/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, training a satisfactory deeper GNN is often quite challenging due to the well-known over-parameterization and over-smoothing issues, which leads to invalid knowledge transfer in practical applications. In this paper, we propose the first free-direction knowledge distillation framework via reinforcement learning for GNNs, called FreeKD, which no longer requires a deeper, well-optimized teacher GNN. Our core idea is to collaboratively learn two shallower GNNs and exchange knowledge between them via reinforcement learning in a hierarchical way. Observing that a typical GNN model often performs better at some nodes and worse at others during training, we devise a dynamic, free-direction knowledge transfer strategy that involves two levels of actions: 1) a node-level action determines the direction of knowledge transfer between the corresponding nodes of the two networks; and 2) a structure-level action determines which of the local structures generated by the node-level actions are propagated. Additionally, considering that different augmented graphs can capture distinct perspectives or representations of the graph data, we propose FreeKD-Prompt, which learns undistorted and diverse augmentations based on prompt learning for exchanging varied knowledge. Furthermore, instead of confining knowledge exchange to two GNNs, we develop FreeKD++ and FreeKD-Prompt++ to enable free-direction knowledge transfer among multiple shallow GNNs. Extensive experiments on five benchmark datasets demonstrate that our approaches outperform the base GNNs by a large margin and show their efficacy across various GNNs. More surprisingly, our FreeKD achieves comparable or even better performance than traditional KD algorithms that distill knowledge from a deeper and stronger teacher GNN.
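To make the node-level half of this transfer strategy concrete, below is a minimal sketch of free-direction distillation between two shallow GNNs. It is not the authors' implementation: the paper chooses transfer directions with a reinforcement learning agent and adds a structure-level action on top, whereas this sketch substitutes a simple heuristic (whichever network has the lower per-node loss acts as the teacher for that node), and all names (`free_direction_step`, `tau`, `lambda_kd`) are illustrative assumptions.

```python
# Illustrative sketch only: per-node, free-direction distillation between two
# shallow GNNs. The paper's RL agent for choosing directions is replaced here
# by a loss-based heuristic, purely to show the data flow.
import torch
import torch.nn.functional as F

def free_direction_step(logits_a, logits_b, labels, tau=2.0, lambda_kd=1.0):
    """One joint training step for two GNNs on the same labeled nodes.

    logits_a, logits_b: [N, C] outputs of the two networks; labels: [N].
    """
    ce_a = F.cross_entropy(logits_a, labels, reduction="none")  # per-node loss of A
    ce_b = F.cross_entropy(logits_b, labels, reduction="none")  # per-node loss of B

    # Node-level "action": where A is currently better, A teaches B, and vice
    # versa. The paper learns this decision with reinforcement learning.
    a_teaches = (ce_a < ce_b).float()  # [N], 1.0 where A acts as teacher

    # Temperature-softened distributions; the teacher side is detached so
    # gradients only flow into the student side at each node.
    log_p_a = F.log_softmax(logits_a / tau, dim=-1)
    log_p_b = F.log_softmax(logits_b / tau, dim=-1)
    q_a = F.softmax(logits_a.detach() / tau, dim=-1)
    q_b = F.softmax(logits_b.detach() / tau, dim=-1)

    kd_a_to_b = F.kl_div(log_p_b, q_a, reduction="none").sum(-1)  # per-node KL
    kd_b_to_a = F.kl_div(log_p_a, q_b, reduction="none").sum(-1)

    kd = (a_teaches * kd_a_to_b + (1.0 - a_teaches) * kd_b_to_a).mean() * tau**2
    return ce_a.mean() + ce_b.mean() + lambda_kd * kd

# Usage: compute logits from two shallow GNNs (e.g., a GCN and a GAT) on the
# same graph, backpropagate the joint loss, then step both optimizers:
#   loss = free_direction_step(gnn_a_logits[mask], gnn_b_logits[mask], y[mask])
```

In the full method, the structure-level action would further gate which of the local structures produced by these node-level decisions are actually propagated, and the per-node direction would be sampled from a learned policy rather than read off the current losses.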