PGFed: Personalize Each Client's Global Objective for Federated Learning.

Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu
DOI: 10.1109/iccv51070.2023.00365
Published in: Proceedings. IEEE International Conference on Computer Vision, vol. 2023, pp. 3923-3933
Publication date: 2023-10-01 (Epub 2024-01-15)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11024864/pdf/
Cited by: 0

Abstract

Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only implicitly transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose Personalized Global Federated Learning (PGFed), a novel personalized FL framework that enables each client to personalize its own global objective by explicitly and adaptively aggregating the empirical risks of itself and other clients. To avoid massive O(N²) communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.
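The personalized global objective described above can be sketched as follows. This is a minimal scalar-model illustration under stated assumptions (function and variable names are illustrative, not taken from the authors' released code): each client minimizes its own empirical risk plus an explicit, adaptively weighted sum of the other clients' risks, and to sidestep the O(N²) communication cost, client j's risk at client i's model w is replaced by its first-order approximation around j's own model w_j.

```python
# Hedged sketch of a PGFed-style personalized global objective
# (scalar models for clarity; illustrative names, not the authors' code).
#
# Client j's risk at an arbitrary model w is estimated by the first-order
# (Taylor) approximation around j's own model w_j:
#     R_j(w) ~= R_j(w_j) + R_j'(w_j) * (w - w_j)
# so only a scalar risk value and a gradient need to travel via the server,
# rather than every client evaluating its risk on every other client's model.

def first_order_risk(w, w_j, risk_j, grad_j):
    """First-order estimate of client j's risk at model w."""
    return risk_j + grad_j * (w - w_j)

def personalized_global_objective(w, own_risk_fn, others, alphas):
    """One client's personalized global objective: its own empirical risk
    plus the adaptively weighted, approximated risks of the other clients.

    others: list of (w_j, R_j(w_j), R_j'(w_j)) tuples shared via the server.
    alphas: this client's adaptive aggregation weights.
    """
    obj = own_risk_fn(w)
    for (w_j, risk_j, grad_j), alpha in zip(others, alphas):
        obj += alpha * first_order_risk(w, w_j, risk_j, grad_j)
    return obj

# Toy check: for a linear risk R_j(w) = 3w + 1, the first-order
# approximation is exact, so the estimate at w = 2 recovers R_j(2) = 7.
print(first_order_risk(2.0, 5.0, 16.0, 3.0))
```

The momentum variant (PGFedMo) would, per the abstract, reuse clients' empirical risks more efficiently; a natural reading is that each client maintains a moving average of the received risk/gradient terms rather than using only the latest round's values, though the exact update is specified in the paper and code.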
