Sparse Communication for Federated Learning

Kundjanasith Thonglek, Keichi Takahashi, Kohei Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida
{"title":"稀疏通信用于联邦学习","authors":"Kundjanasith Thonglek, Keichi Takahashi, Koheix Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida","doi":"10.1109/icfec54809.2022.00008","DOIUrl":null,"url":null,"abstract":"Federated learning trains a model on a centralized server using datasets distributed over a massive amount of edge devices. Since federated learning does not send local data from edge devices to the server, it preserves data privacy. It transfers the local models from edge devices instead of the local data. However, communication costs are frequently a problem in federated learning. This paper proposes a novel method to reduce the required communication cost for federated learning by transferring only top updated parameters in neural network models. The proposed method allows adjusting the criteria of updated parameters to trade-off the reduction of communication costs and the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it can achieve comparable performance to transfer original models for federated learning. As a result, the proposed method has achieved a reduction of the required communication costs around 90% when compared to the conventional method for VGG16. Furthermore, we found out that the proposed method is able to reduce the communication cost of a large model more than of a small model due to the different threshold of updated parameters in each model architecture.","PeriodicalId":423599,"journal":{"name":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Sparse Communication for Federated Learning\",\"authors\":\"Kundjanasith Thonglek, Keichi Takahashi, Koheix Ichikawa, Chawanat Nakasan, P. Leelaprute, Hajimu Iida\",\"doi\":\"10.1109/icfec54809.2022.00008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning trains a model on a centralized server using datasets distributed over a massive amount of edge devices. Since federated learning does not send local data from edge devices to the server, it preserves data privacy. It transfers the local models from edge devices instead of the local data. However, communication costs are frequently a problem in federated learning. This paper proposes a novel method to reduce the required communication cost for federated learning by transferring only top updated parameters in neural network models. The proposed method allows adjusting the criteria of updated parameters to trade-off the reduction of communication costs and the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it can achieve comparable performance to transfer original models for federated learning. As a result, the proposed method has achieved a reduction of the required communication costs around 90% when compared to the conventional method for VGG16. 
Furthermore, we found out that the proposed method is able to reduce the communication cost of a large model more than of a small model due to the different threshold of updated parameters in each model architecture.\",\"PeriodicalId\":423599,\"journal\":{\"name\":\"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)\",\"volume\":\"53 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icfec54809.2022.00008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icfec54809.2022.00008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Federated learning trains a model on a centralized server using datasets distributed over a massive number of edge devices. Since federated learning does not send local data from edge devices to the server, it preserves data privacy: it transfers the local models from the edge devices instead of the local data. However, communication cost is frequently a problem in federated learning. This paper proposes a novel method to reduce the communication cost of federated learning by transferring only the most significantly updated parameters of the neural network models. The proposed method allows adjusting the criterion for selecting updated parameters to trade off the reduction in communication cost against the loss of model accuracy. We evaluated the proposed method using diverse models and datasets and found that it achieves performance comparable to transferring the original models. As a result, the proposed method reduces the required communication cost by around 90% compared to the conventional method for VGG16. Furthermore, we found that the proposed method reduces the communication cost of a large model more than that of a small model, due to the different thresholds of updated parameters in each model architecture.
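
To make the idea concrete, here is a minimal, hypothetical Python sketch of top-update sparsification; it is not the paper's implementation. Each client computes the difference between its local model and the global model, keeps only the largest-magnitude entries per layer, and sends just those indices and values; the server adds them back into the global model. The names `sparsify_update`, `apply_sparse_update`, and `keep_ratio` are illustrative (`keep_ratio` stands in for the adjustable criterion that trades communication cost against accuracy), and multi-client averaging is omitted for brevity.

```python
# Illustrative sketch only: send the top-updated parameters instead of full models.
import numpy as np

def sparsify_update(global_params, local_params, keep_ratio=0.1):
    """Return {layer_name: (indices, values)} keeping only the largest updates."""
    sparse_update = {}
    for name, g in global_params.items():
        delta = local_params[name] - g                  # local update for this layer
        flat = np.abs(delta).ravel()
        k = max(1, int(keep_ratio * flat.size))         # number of entries to keep
        top_idx = np.argpartition(flat, -k)[-k:]        # indices of the k largest updates
        sparse_update[name] = (top_idx, delta.ravel()[top_idx])
    return sparse_update

def apply_sparse_update(global_params, sparse_update):
    """Server side: add the received sparse deltas into the global model in place."""
    for name, (idx, vals) in sparse_update.items():
        flat = global_params[name].ravel()              # view into the (contiguous) array
        flat[idx] += vals
    return global_params

# Example: a client sends roughly 10% of its parameter updates for one layer.
global_params = {"fc": np.zeros((4, 4))}
local_params = {"fc": np.random.randn(4, 4)}
update = sparsify_update(global_params, local_params, keep_ratio=0.1)
apply_sparse_update(global_params, update)
```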