Tackling data-heterogeneity variations in federated learning via adaptive aggregate weights

IF 7.2 · SCI Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Knowledge-Based Systems · Pub Date: 2024-09-06 · DOI: 10.1016/j.knosys.2024.112484
URL: https://www.sciencedirect.com/science/article/pii/S0950705124011183
Citations: 0

Abstract

In federated learning (FL), ensuring the efficiency of global models generated from the weighted aggregation of local models under data heterogeneity remains challenging. Moreover, the contradiction between imprecise aggregation weights and changing data distributions produces aggregation errors that grow at an accelerating rate throughout the process. We therefore present federated learning using adaptive aggregate weights (FedAAW), which adjusts the optimization direction in both the local-training and global-aggregation steps, reducing the inefficiency in the global model caused by the accelerated growth of aggregation errors under changing heterogeneity. In each round, global- and local-model information is dynamically combined to generate an initial model at the beginning of local training. The key module in FedAAW is adaptive aggregate weights (AAW), which update the aggregation weights by sharing an optimization objective with global training and using gradient information from other clients to accurately compute the direction of the weight update. AAW guarantees consistency between the weight updates and global optimization, and we theoretically demonstrate its convergence. Comprehensive experiments on public datasets show that FedAAW attains higher test accuracy than six state-of-the-art algorithms, with improvements of up to 50%; on CIFAR100, a complex dataset, it improves on the best-performing baseline by 14%. FedAAW also reaches a specified accuracy faster than the other algorithms in our experiments; in particular, it is approximately three times faster than federated learning with adaptive local aggregation. In addition, results obtained in experimental environments with different client counts and heterogeneous data confirm that FedAAW is robust and scalable.
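The abstract does not spell out the AAW update rule, so the following is only a toy sketch of the general idea it describes: treating the aggregation weights as trainable parameters and nudging them along the gradient of a shared global objective, so that clients whose models help that objective receive larger weights. The softmax parameterization, the finite-difference gradient, and all names here are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(local_models, logits):
    """Weighted average of client models; softmax keeps weights on the simplex."""
    w = np.exp(logits) / np.exp(logits).sum()
    return sum(wi * m for wi, m in zip(w, local_models))

def global_loss(model, data, targets):
    """Toy quadratic objective standing in for the shared global objective."""
    preds = data @ model
    return ((preds - targets) ** 2).mean()

# Three clients with increasingly noisy local solutions to a 2-D regression task,
# mimicking heterogeneous local training.
data = rng.normal(size=(50, 2))
true_model = np.array([1.0, -2.0])
targets = data @ true_model
local_models = [true_model + rng.normal(scale=s, size=2) for s in (0.1, 0.5, 2.0)]

weights = np.zeros(3)  # logits; softmax of zeros gives equal initial weights
for _ in range(100):
    # Finite-difference gradient of the global loss w.r.t. the weight logits:
    # the adaptive-weight idea of updating aggregation weights along a
    # shared optimization objective, in its simplest possible form.
    base = global_loss(aggregate(local_models, weights), data, targets)
    grad = np.zeros_like(weights)
    for i in range(3):
        w_eps = weights.copy()
        w_eps[i] += 1e-4
        grad[i] = (global_loss(aggregate(local_models, w_eps), data, targets) - base) / 1e-4
    weights -= 0.5 * grad

final = np.exp(weights) / np.exp(weights).sum()
# After training, the least-noisy client ends up with a larger aggregation
# weight than the noisiest one, and the global loss is lower than under
# uniform weighting.
```

The point of the sketch is only the direction of information flow: the weight update is driven by the same objective the global model optimizes, which is the consistency property the abstract attributes to AAW.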

Source journal: Knowledge-Based Systems (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.