FedEem: a fairness-based asynchronous federated learning mechanism

IF: 3.7 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems)
Journal of Cloud Computing: Advances, Systems and Applications
Published: 2023-11-09 | DOI: 10.1186/s13677-023-00535-2
Wei Gu, Yifan Zhang
{"title":"FedEem: a fairness-based asynchronous federated learning mechanism","authors":"Wei Gu, Yifan Zhang","doi":"10.1186/s13677-023-00535-2","DOIUrl":null,"url":null,"abstract":"Abstract Federated learning is a mechanism for model training in distributed systems, aiming to protect data privacy while achieving collective intelligence. In traditional synchronous federated learning, all participants must update the model synchronously, which may result in a decrease in the overall model update frequency due to lagging participants. In order to solve this problem, asynchronous federated learning introduces an asynchronous aggregation mechanism, allowing participants to update models at their own time and rate, and then aggregate each updated edge model on the cloud, thus speeding up the training process. However, under the asynchronous aggregation mechanism, federated learning faces new challenges such as convergence difficulties and unfair model accuracy. This paper first proposes a fairness-based asynchronous federated learning mechanism, which reduces the adverse effects of device and data heterogeneity on the convergence process by using outdatedness and interference-aware weight aggregation, and promotes model personalization and fairness through an early exit mechanism. Mathematical analysis derives the upper bound of convergence speed and the necessary conditions for hyperparameters. Experimental results demonstrate the advantages of the proposed method compared to baseline algorithms, indicating the effectiveness of the proposed method in promoting convergence speed and fairness in federated learning.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":" 33","pages":"0"},"PeriodicalIF":3.7000,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cloud Computing-Advances Systems and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s13677-023-00535-2","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning is a mechanism for model training in distributed systems that aims to protect data privacy while achieving collective intelligence. In traditional synchronous federated learning, all participants must update the model in lockstep, so lagging participants can reduce the overall model update frequency. To address this problem, asynchronous federated learning introduces an asynchronous aggregation mechanism that allows participants to update their models on their own schedules and at their own pace; each updated edge model is then aggregated in the cloud, speeding up training. Under asynchronous aggregation, however, federated learning faces new challenges such as difficulty converging and unfairness in model accuracy across participants. This paper proposes FedEem, a fairness-based asynchronous federated learning mechanism that reduces the adverse effects of device and data heterogeneity on convergence through staleness- and interference-aware weight aggregation, and promotes model personalization and fairness through an early-exit mechanism. Mathematical analysis derives an upper bound on the convergence rate and necessary conditions on the hyperparameters. Experimental results demonstrate the advantages of the proposed method over baseline algorithms, indicating its effectiveness in improving convergence speed and fairness in federated learning.
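To make the aggregation idea concrete, below is a minimal, hypothetical Python sketch of staleness-aware asynchronous aggregation, the general technique the abstract describes. It is not the paper's actual FedEem rule: the polynomial staleness decay, the `alpha` and `base_mix` parameters, and all function and type names are illustrative assumptions, and the interference-aware term and early-exit mechanism are omitted.

```python
# Minimal, hypothetical sketch of staleness-aware asynchronous aggregation.
# NOT the paper's FedEem rule: the decay form, alpha, and base_mix below are
# illustrative assumptions; interference-awareness and early exit are omitted.
from dataclasses import dataclass

import numpy as np


@dataclass
class ClientUpdate:
    weights: np.ndarray  # flattened model parameters sent by an edge client
    round_trained: int   # global round whose model the client trained from


def staleness_factor(staleness: int, alpha: float = 0.6) -> float:
    """Polynomial decay: an update's influence shrinks as it grows staler."""
    return (1.0 + staleness) ** (-alpha)


def async_aggregate(global_weights: np.ndarray,
                    update: ClientUpdate,
                    current_round: int,
                    base_mix: float = 0.5) -> np.ndarray:
    """Mix one arriving client model into the global model.

    The mixing coefficient is discounted by staleness, so a straggler's
    update nudges the global model less than a fresh one does.
    """
    staleness = current_round - update.round_trained
    mix = base_mix * staleness_factor(staleness)
    # Convex combination keeps the global model stable against stale updates.
    return (1.0 - mix) * global_weights + mix * update.weights


# Usage: a fresh update (staleness 0) moves the global model further
# than a stale one (staleness 8) carrying the same weights.
g = np.zeros(4)
fresh = ClientUpdate(weights=np.ones(4), round_trained=10)
stale = ClientUpdate(weights=np.ones(4), round_trained=2)
print(async_aggregate(g, fresh, current_round=10))  # [0.5 0.5 0.5 0.5]
print(async_aggregate(g, stale, current_round=10))  # ~[0.134 0.134 ...]
```

The key design choice here is the convex combination: a client's contribution shrinks as the gap between the current round and the round its model was trained from grows, which bounds how far a badly stale update can pull the global model.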
Source journal: Journal of Cloud Computing: Advances, Systems and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 6.80
Self-citation rate: 7.50%
Articles published: 76
Review time: 75 days
About the journal: The Journal of Cloud Computing: Advances, Systems and Applications (JoCCASA) publishes research articles on all aspects of cloud computing. Principally, articles address topics that are core to cloud computing, focusing on cloud applications, cloud systems, and the advances that will lead to the clouds of the future. Comprehensive review and survey articles that offer new insights, and lay the foundations for further exploratory and experimental work, are also relevant.
Latest articles in this journal:
Research on electromagnetic vibration energy harvester for cloud-edge-end collaborative architecture in power grid
FedEem: a fairness-based asynchronous federated learning mechanism
Adaptive device sampling and deadline determination for cloud-based heterogeneous federated learning
Review on the application of cloud computing in the sports industry
Improving cloud storage and privacy security for digital twin based medical records