One Teacher is Enough: A Server-Clueless Federated Learning With Knowledge Distillation

Impact Factor: 5.8 · CAS Region 2 (Computer Science) · JCR Q1 (Computer Science, Information Systems) · IEEE Transactions on Services Computing · Pub Date: 2024-06-13 · DOI: 10.1109/TSC.2024.3414372
Wanyi Ning; Qi Qi; Jingyu Wang; Mengde Zhu; Shaolong Li; Guang Yang; Jianxin Liao
IEEE Transactions on Services Computing, vol. 17, no. 5, pp. 2704-2718. https://ieeexplore.ieee.org/document/10556806/
Citations: 0

Abstract

Machine learning-based services offer intelligent solutions built on powerful models. To enhance model robustness, Federated Learning (FL) has emerged as a promising collaborative learning paradigm that iteratively trains a global model through parameter exchange among multiple clients, each using its own local data. In general, the local data are heterogeneous, which slows convergence. Knowledge distillation is an effective technique against data heterogeneity, but existing works distill the ensemble knowledge of the local models, ignoring the natural global knowledge carried by the aggregated model. This places limitations on their algorithms, such as the need for proxy data or the exposure of local models to the server, which is prohibited in most privacy-preserving FL settings with a clueless server. In this work, we propose FedDGT, a novel knowledge distillation method for industrial server-clueless FL. FedDGT treats the aggregated model as the sole teacher, imparts its global knowledge into a generator, and then regularizes the drifted local models through that generator, overcoming the previous limitations and offering better privacy and scalability support. Extensive experiments demonstrate that FedDGT achieves highly competitive model performance while greatly reducing the number of communication rounds in a server-clueless scenario.
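The abstract describes the mechanism only at a high level. The following is a minimal PyTorch sketch of one plausible reading, not the authors' implementation: a server-side step distills the aggregated teacher into a label-conditioned generator without any proxy data, and a client-side step uses generator samples plus a KL term to pull drifted local models back toward the teacher. The Generator architecture, all dimensions (NOISE_DIM, NUM_CLASSES, FEAT_DIM), loss weights, and function names below are illustrative assumptions.

# Minimal sketch (not the authors' code) of the single-teacher idea from the
# abstract. Assumes global_model maps (B, FEAT_DIM) inputs to (B, NUM_CLASSES)
# logits; all shapes and hyperparameters here are illustrative guesses.
import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE_DIM, NUM_CLASSES, FEAT_DIM = 64, 10, 32

class Generator(nn.Module):
    """Maps noise plus a class label to a synthetic input (assumed design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, FEAT_DIM),
        )

    def forward(self, z, y):
        y_onehot = F.one_hot(y, NUM_CLASSES).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

def distill_generator(global_model, generator, steps=200):
    """Server side: train the generator so the frozen aggregated model, the
    sole teacher, confidently predicts the conditioning label on generated
    samples. No proxy data and no local models are needed."""
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    global_model.eval()
    for _ in range(steps):
        z = torch.randn(64, NOISE_DIM)
        y = torch.randint(0, NUM_CLASSES, (64,))
        x_syn = generator(z, y)
        # Gradient flows through the teacher but only generator params update.
        loss = F.cross_entropy(global_model(x_syn), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return generator

def local_update(local_model, global_model, generator, loader, beta=1.0):
    """Client side: the usual supervised loss on private data, plus a KL term
    that pulls the local model toward the teacher's outputs on synthetic
    samples, regularizing client drift."""
    opt = torch.optim.SGD(local_model.parameters(), lr=0.01)
    global_model.eval(); generator.eval()
    for x, y in loader:
        with torch.no_grad():
            z = torch.randn(x.size(0), NOISE_DIM)
            y_syn = torch.randint(0, NUM_CLASSES, (x.size(0),))
            x_syn = generator(z, y_syn)
        kd = F.kl_div(F.log_softmax(local_model(x_syn), dim=1),
                      F.softmax(global_model(x_syn), dim=1),
                      reduction="batchmean")
        loss = F.cross_entropy(local_model(x), y) + beta * kd
        opt.zero_grad(); loss.backward(); opt.step()

The point the sketch preserves is the one the abstract emphasizes: only the aggregated model and the generator are involved in distillation, so individual local models are never exposed to the server, consistent with the server-clueless constraint.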
Source journal
IEEE Transactions on Services Computing (Computer Science, Information Systems; Computer Science, Software Engineering)
CiteScore: 11.50
Self-citation rate: 6.20%
Articles published: 278
Review time: >12 weeks
About the journal: IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It places emphasis on algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services. It also covers areas like composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, and collaboration in the realm of Services Computing.