{"title":"利用NSGA-II优化联邦学习中的通信开销","authors":"José Á. Morell, Z. Dahi, F. Chicano, Gabriel Luque, E. Alba","doi":"10.48550/arXiv.2204.02183","DOIUrl":null,"url":null,"abstract":"Federated learning is a training paradigm according to which a server-based model is cooperatively trained using local models running on edge devices and ensuring data privacy. These devices exchange information that induces a substantial communication load, which jeopardises the functioning efficiency. The difficulty of reducing this overhead stands in achieving this without decreasing the model's efficiency (contradictory relation). To do so, many works investigated the compression of the pre/mid/post-trained models and the communication rounds, separately, although they jointly contribute to the communication overload. Our work aims at optimising communication overhead in federated learning by (I) modelling it as a multi-objective problem and (II) applying a multi-objective optimization algorithm (NSGA-II) to solve it. To the best of the author's knowledge, this is the first work that \\texttt{(I)} explores the add-in that evolutionary computation could bring for solving such a problem, and \\texttt{(II)} considers both the neuron and devices features together. We perform the experimentation by simulating a server/client architecture with 4 slaves. We investigate both convolutional and fully-connected neural networks with 12 and 3 layers, 887,530 and 33,400 weights, respectively. We conducted the validation on the \\texttt{MNIST} dataset containing 70,000 images. The experiments have shown that our proposal could reduce communication by 99% and maintain an accuracy equal to the one obtained by the FedAvg Algorithm that uses 100% of communications.","PeriodicalId":91839,"journal":{"name":"Applications of Evolutionary Computation : 17th European Conference, EvoApplications 2014, Granada, Spain, April 23-25, 2014 : revised selected papers. EvoApplications (Conference) (17th : 2014 : Granada, Spain)","volume":"55 1","pages":"317-333"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Optimising Communication Overhead in Federated Learning Using NSGA-II\",\"authors\":\"José Á. Morell, Z. Dahi, F. Chicano, Gabriel Luque, E. Alba\",\"doi\":\"10.48550/arXiv.2204.02183\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning is a training paradigm according to which a server-based model is cooperatively trained using local models running on edge devices and ensuring data privacy. These devices exchange information that induces a substantial communication load, which jeopardises the functioning efficiency. The difficulty of reducing this overhead stands in achieving this without decreasing the model's efficiency (contradictory relation). To do so, many works investigated the compression of the pre/mid/post-trained models and the communication rounds, separately, although they jointly contribute to the communication overload. Our work aims at optimising communication overhead in federated learning by (I) modelling it as a multi-objective problem and (II) applying a multi-objective optimization algorithm (NSGA-II) to solve it. To the best of the author's knowledge, this is the first work that \\\\texttt{(I)} explores the add-in that evolutionary computation could bring for solving such a problem, and \\\\texttt{(II)} considers both the neuron and devices features together. 
We perform the experimentation by simulating a server/client architecture with 4 slaves. We investigate both convolutional and fully-connected neural networks with 12 and 3 layers, 887,530 and 33,400 weights, respectively. We conducted the validation on the \\\\texttt{MNIST} dataset containing 70,000 images. The experiments have shown that our proposal could reduce communication by 99% and maintain an accuracy equal to the one obtained by the FedAvg Algorithm that uses 100% of communications.\",\"PeriodicalId\":91839,\"journal\":{\"name\":\"Applications of Evolutionary Computation : 17th European Conference, EvoApplications 2014, Granada, Spain, April 23-25, 2014 : revised selected papers. EvoApplications (Conference) (17th : 2014 : Granada, Spain)\",\"volume\":\"55 1\",\"pages\":\"317-333\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applications of Evolutionary Computation : 17th European Conference, EvoApplications 2014, Granada, Spain, April 23-25, 2014 : revised selected papers. EvoApplications (Conference) (17th : 2014 : Granada, Spain)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2204.02183\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applications of Evolutionary Computation : 17th European Conference, EvoApplications 2014, Granada, Spain, April 23-25, 2014 : revised selected papers. EvoApplications (Conference) (17th : 2014 : Granada, Spain)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2204.02183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Federated learning is a training paradigm in which a server-based model is cooperatively trained using local models running on edge devices, ensuring data privacy. These devices exchange information that induces a substantial communication load, which jeopardises operating efficiency. The difficulty of reducing this overhead lies in doing so without degrading the model's accuracy: the two objectives are in conflict. Many works have therefore investigated compressing the pre-, mid-, or post-trained models and reducing the number of communication rounds, but separately, even though both jointly contribute to the communication overload. Our work aims at optimising the communication overhead in federated learning by (I) modelling it as a multi-objective problem and (II) applying a multi-objective optimisation algorithm, NSGA-II, to solve it. To the best of the authors' knowledge, this is the first work that (I) explores what evolutionary computation can bring to solving this problem and (II) considers neuron and device features together. We run the experiments by simulating a server/client architecture with 4 slaves. We investigate both convolutional and fully connected neural networks, with 12 and 3 layers and 887,530 and 33,400 weights, respectively. We validate on the MNIST dataset, which contains 70,000 images. The experiments show that our proposal can reduce communication by 99% while maintaining an accuracy equal to that obtained by the FedAvg algorithm, which uses 100% of the communications.
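To make the formulation concrete, below is a minimal sketch, not the authors' implementation, of how the communication/accuracy trade-off can be cast as a bi-objective problem and handed to NSGA-II using the pymoo library. The decision vector (per-client transmission fractions), the class name FLCommunicationProblem, and both objective functions are illustrative assumptions; in the paper's setting the objectives would be evaluated against an actual federated training run.

```python
# A hedged sketch: bi-objective communication/accuracy trade-off solved with
# NSGA-II via pymoo. Objective functions are toy surrogates, not real training.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class FLCommunicationProblem(ElementwiseProblem):
    """Decision vector x: one fraction per client giving the share of model
    parameters it transmits each round (an assumption for illustration)."""
    def __init__(self, n_clients=4):
        super().__init__(n_var=n_clients, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        # Objective 1: total communication (sum of transmitted fractions).
        comm = float(np.sum(x))
        # Objective 2: accuracy loss; a toy surrogate that penalises aggressive
        # compression. A real study would train and evaluate the model here.
        acc_loss = float(np.mean((1.0 - x) ** 2))
        out["F"] = [comm, acc_loss]

problem = FLCommunicationProblem(n_clients=4)
algorithm = NSGA2(pop_size=40)
res = minimize(problem, algorithm, ("n_gen", 50), seed=1, verbose=False)
print(res.F)  # Pareto front: communication vs. accuracy-loss trade-offs
```

The output is a Pareto front rather than a single solution, which matches the multi-objective framing: each point is a different compromise between bytes exchanged and model quality.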
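For reference, the FedAvg baseline mentioned in the abstract aggregates client updates by a weighted average over local sample counts. The sketch below follows the standard FedAvg formulation (McMahan et al., 2017) rather than the authors' code; the function name fedavg and the toy client data are made up for the example.

```python
# A minimal sketch of the FedAvg aggregation step (the 100%-communication
# baseline): average client weights, weighted by local dataset sizes.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list over clients, each a list of per-layer ndarrays.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Example: 4 clients (slaves), one toy layer of 3 weights each.
clients = [[np.array([1.0, 2.0, 3.0])] for _ in range(4)]
sizes = [100, 200, 300, 400]
print(fedavg(clients, sizes))  # identical inputs -> [array([1., 2., 3.])]
```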
Latest articles from this journal:
- Applications of Evolutionary Computation: 26th European Conference, EvoApplications 2023, Held as Part of EvoStar 2023, Brno, Czech Republic, April 12–14, 2023, Proceedings
- Optimising Communication Overhead in Federated Learning Using NSGA-II
- The Asteroid Routing Problem: A Benchmark for Expensive Black-Box Permutation Optimization
- Explainable Landscape Analysis in Automated Algorithm Performance Prediction
- Search Trajectories Networks of Multiobjective Evolutionary Algorithms