Advancing Federated Learning by Addressing Data and System Heterogeneity

Yiran Chen
{"title":"Advancing Federated Learning by Addressing Data and System Heterogeneity","authors":"Yiran Chen","doi":"10.1609/aaaiss.v3i1.31214","DOIUrl":null,"url":null,"abstract":"In the emerging field of federated learning (FL), the challenge of heterogeneity, both in data and systems, presents significant obstacles to efficient and effective model training. This talk focuses on the latest advancements and solutions addressing these challenges.\n\nThe first part of the talk delves into data heterogeneity, a core issue in FL, where data distributions across different clients vary widely and affect FL convergence. We will introduce the FedCor framework addressing this by modeling loss correlations between clients using Gaussian Process and reducing expected global loss. External covariate shift in FL is uncovered, demonstrating that normalization layers are crucial, and layer normalization proves effective. Additionally, class imbalance in FL degrades performance, but our proposed Federated Class-balanced Sampling (Fed-CBS) mechanism reduces this imbalance by employing homomorphic encryption for privacy preservation.\n\nThe second part of the talk shifts focus to system heterogeneity, an equally critical challenge in FL. System heterogeneity involves the varying computational capabilities, network speeds, and other resource-related constraints of participating devices in FL. To address this, we introduce FedSEA, which is a semi-asynchronous FL framework that addresses accuracy drops by balancing aggregation frequency and predicting local update arrival. Additionally, we discuss FedRepre, a framework specifically designed to enhance FL in real-world environments by addressing challenges including unbalanced local dataset distributions, uneven computational capabilities, and fluctuating network speeds. By introducing a client selection mechanism and a specialized server architecture, FedRepre notably improves the efficiency, scalability, and performance of FL systems.\n\nOur talk aims to provide a comprehensive overview of the current research and advancements in tackling both data and system heterogeneity in federated learning. We hope to highlight the path forward for FL, underlining its potential in diverse real-world applications while maintaining data privacy and optimizing resource usage.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31214","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In the emerging field of federated learning (FL), heterogeneity in both data and systems presents significant obstacles to efficient and effective model training. This talk focuses on recent advancements and solutions addressing these challenges.

The first part of the talk delves into data heterogeneity, a core issue in FL: data distributions vary widely across clients and hinder FL convergence. We introduce the FedCor framework, which addresses this by modeling loss correlations between clients with a Gaussian process and selecting clients so as to reduce the expected global loss. We also uncover external covariate shift in FL, showing that the choice of normalization layer is crucial and that layer normalization is particularly effective. Additionally, class imbalance in FL degrades performance; our proposed Federated Class-balanced Sampling (Fed-CBS) mechanism reduces this imbalance while employing homomorphic encryption to preserve privacy.

The second part of the talk shifts focus to system heterogeneity, an equally critical challenge in FL: participating devices differ in computational capability, network speed, and other resource constraints. To address this, we introduce FedSEA, a semi-asynchronous FL framework that mitigates accuracy drops by balancing aggregation frequency and predicting the arrival times of local updates. We also discuss FedRepre, a framework designed to strengthen FL in real-world environments by addressing unbalanced local dataset distributions, uneven computational capabilities, and fluctuating network speeds. By introducing a client selection mechanism and a specialized server architecture, FedRepre notably improves the efficiency, scalability, and performance of FL systems.

Our talk aims to provide a comprehensive overview of current research and advancements in tackling both data and system heterogeneity in federated learning. We hope to highlight the path forward for FL, underlining its potential in diverse real-world applications while maintaining data privacy and optimizing resource usage.
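To make the correlation-based client-selection idea concrete, the sketch below illustrates a Gaussian-process view of client selection in the spirit of FedCor. This is not the authors' implementation: the function name select_clients_gp, the use of the empirical covariance of past per-client losses as the GP kernel, and the greedy variance-reduction objective are simplifying assumptions made purely for illustration.

```python
import numpy as np

def select_clients_gp(loss_history, num_select, noise=1e-3):
    """Greedy client selection based on a Gaussian-process view of
    client loss correlations (FedCor-style idea, heavily simplified).

    loss_history: (rounds, num_clients) array of per-client losses from
    past rounds; its empirical covariance serves as the kernel over clients.
    Returns indices of `num_select` clients whose updates are expected to
    be most informative about the losses of the unselected clients too.
    """
    num_clients = loss_history.shape[1]
    # Empirical covariance of client losses acts as the kernel matrix;
    # a small jitter keeps every principal submatrix positive definite.
    K = np.cov(loss_history, rowvar=False) + noise * np.eye(num_clients)

    selected = []
    remaining = list(range(num_clients))
    for _ in range(num_select):
        best, best_gain = None, -np.inf
        for c in remaining:
            idx = selected + [c]
            K_ss = K[np.ix_(idx, idx)]
            K_rs = K[np.ix_(remaining, idx)]
            # Total GP posterior variance of the remaining clients' losses
            # after conditioning on the candidate set; lower total variance
            # means the candidate set "explains" more of the global loss.
            posterior_var = np.trace(
                K[np.ix_(remaining, remaining)]
                - K_rs @ np.linalg.solve(K_ss, K_rs.T)
            )
            gain = -posterior_var
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic history: 20 rounds of losses for 10 hypothetical clients.
    losses = rng.normal(loc=1.0, scale=0.2, size=(20, 10))
    print(select_clients_gp(losses, num_select=3))
```

The greedy variance-reduction criterion is one simple way to exploit loss correlations between clients; the actual FedCor objective and kernel construction may differ from this sketch.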