FedFT: Improving Communication Performance for Federated Learning with Frequency Space Transformation

Chamath Palihawadana, Nirmalie Wiratunga, Anjana Wijekoon, Harsha Kalutarage
{"title":"FedFT: Improving Communication Performance for Federated Learning with Frequency Space Transformation","authors":"Chamath Palihawadana, Nirmalie Wiratunga, Anjana Wijekoon, Harsha Kalutarage","doi":"arxiv-2409.05242","DOIUrl":null,"url":null,"abstract":"Communication efficiency is a widely recognised research problem in Federated\nLearning (FL), with recent work focused on developing techniques for efficient\ncompression, distribution and aggregation of model parameters between clients\nand the server. Particularly within distributed systems, it is important to\nbalance the need for computational cost and communication efficiency. However,\nexisting methods are often constrained to specific applications and are less\ngeneralisable. In this paper, we introduce FedFT (federated frequency-space\ntransformation), a simple yet effective methodology for communicating model\nparameters in a FL setting. FedFT uses Discrete Cosine Transform (DCT) to\nrepresent model parameters in frequency space, enabling efficient compression\nand reducing communication overhead. FedFT is compatible with various existing\nFL methodologies and neural architectures, and its linear property eliminates\nthe need for multiple transformations during federated aggregation. This\nmethodology is vital for distributed solutions, tackling essential challenges\nlike data privacy, interoperability, and energy efficiency inherent to these\nenvironments. We demonstrate the generalisability of the FedFT methodology on\nfour datasets using comparative studies with three state-of-the-art FL\nbaselines (FedAvg, FedProx, FedSim). Our results demonstrate that using FedFT\nto represent the differences in model parameters between communication rounds\nin frequency space results in a more compact representation compared to\nrepresenting the entire model in frequency space. This leads to a reduction in\ncommunication overhead, while keeping accuracy levels comparable and in some\ncases even improving it. Our results suggest that this reduction can range from\n5% to 30% per client, depending on dataset.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"106 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05242","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Communication efficiency is a widely recognised research problem in Federated Learning (FL), with recent work focused on developing techniques for efficient compression, distribution and aggregation of model parameters between clients and the server. Particularly within distributed systems, it is important to balance computational cost against communication efficiency. However, existing methods are often constrained to specific applications and are less generalisable. In this paper, we introduce FedFT (federated frequency-space transformation), a simple yet effective methodology for communicating model parameters in an FL setting. FedFT uses the Discrete Cosine Transform (DCT) to represent model parameters in frequency space, enabling efficient compression and reducing communication overhead. FedFT is compatible with various existing FL methodologies and neural architectures, and its linear property eliminates the need for multiple transformations during federated aggregation. This methodology is vital for distributed solutions, tackling essential challenges like data privacy, interoperability, and energy efficiency inherent to these environments. We demonstrate the generalisability of the FedFT methodology on four datasets using comparative studies with three state-of-the-art FL baselines (FedAvg, FedProx, FedSim). Our results demonstrate that using FedFT to represent the differences in model parameters between communication rounds in frequency space yields a more compact representation than representing the entire model in frequency space. This reduces communication overhead while keeping accuracy levels comparable, and in some cases even improving them. Our results suggest that this reduction can range from 5% to 30% per client, depending on the dataset.
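
To make the mechanics concrete, the Python sketch below illustrates the core idea as described in the abstract; it is a minimal sketch, not the authors' implementation. Each client DCT-transforms the difference between its current and previous model parameters, truncates the high-frequency coefficients to compress the payload, and the server aggregates directly in frequency space, relying on the linearity of the DCT: DCT(sum_k w_k * d_k) = sum_k w_k * DCT(d_k). The truncation fraction keep and all helper names here are illustrative assumptions.

    # A minimal sketch of the FedFT idea, not the authors' implementation.
    import numpy as np
    from scipy.fft import dct, idct

    def to_frequency(delta, keep=0.7):
        # DCT-transform a flat parameter-difference vector and keep only
        # the leading (low-frequency) fraction of coefficients.
        coeffs = dct(delta, norm='ortho')
        k = int(len(coeffs) * keep)
        return coeffs[:k]

    def from_frequency(coeffs, size):
        # Zero-pad the truncated spectrum to full length and invert the DCT.
        full = np.zeros(size)
        full[:len(coeffs)] = coeffs
        return idct(full, norm='ortho')

    def aggregate(client_coeffs, weights):
        # Because the DCT is linear, a weighted average of the clients'
        # frequency-space updates equals the DCT of the weighted average of
        # their parameter-space updates, so no inverse transform is needed
        # before FedAvg-style aggregation.
        return np.average(np.stack(client_coeffs), axis=0, weights=weights)

    rng = np.random.default_rng(0)
    deltas = [rng.normal(size=1000) for _ in range(3)]  # per-client round deltas
    encoded = [to_frequency(d) for d in deltas]         # what each client transmits
    update = from_frequency(aggregate(encoded, weights=[0.5, 0.3, 0.2]), size=1000)

With keep=0.7, each client transmits 30% fewer values per round, which would sit at the upper end of the 5% to 30% reduction the abstract reports; the paper's actual compression settings may differ.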