DSFL: Dynamic Sparsification for Federated Learning

Mahdi Beitollahi, Mingrui Liu, Ning Lu
{"title":"联邦学习的动态稀疏化","authors":"Mahdi Beitollahi, Mingrui Liu, Ning Lu","doi":"10.1109/ICCSPA55860.2022.10019204","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is considered the key, enabling approach for privacy-preserving, distributed machine learning (ML) systems. FL requires the periodic transmission of ML models from users to the server. Therefore, communication via resource-constrained networks is currently a fundamental bottleneck in FL, which is restricting the ML model complexity and user participation. One of the notable trends to reduce the communication cost of FL systems is gradient compression, in which techniques in the form of sparsification are utilized. However, these methods utilize a single compression rate for all users and do not consider communication heterogeneity in a real-world FL system. Therefore, these methods are bottlenecked by the worst communication capacity across users. Further, sparsification methods are non-adaptive and do not utilize the redundant, similar information across users' ML models for compression. In this paper, we introduce a novel Dynamic Sparsification for Federated Learning (DSFL) approach that enables users to compress their local models based on their communication capacity at each iteration by using two novel sparsification methods: layer-wise similarity sparsification (LSS) and extended top- $K$ sparsification. LSS enables DSFL to utilize the global redundant information in users' models by using the Centralized Kernel Alignment (CKA) similarity for sparsification. The extended top-$K$ model sparsification method empowers DSFL to accommodate the heterogeneous communication capacity of user devices by allowing different values of sparsification rate $K$ for each user at each iteration. Our extensive experimental results11All code and experiments are publicly available at: https://github.com/mahdibeit/DSFL. on three datasets show that DSFL has a faster convergence rate than fixed sparsification, and as the communication heterogeneity increases, this gap increases. Further, our thorough experimental investigations uncover the similarities of user models across the FL system.","PeriodicalId":106639,"journal":{"name":"2022 5th International Conference on Communications, Signal Processing, and their Applications (ICCSPA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"DSFL: Dynamic Sparsification for Federated Learning\",\"authors\":\"Mahdi Beitollahi, Mingrui Liu, Ning Lu\",\"doi\":\"10.1109/ICCSPA55860.2022.10019204\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is considered the key, enabling approach for privacy-preserving, distributed machine learning (ML) systems. FL requires the periodic transmission of ML models from users to the server. Therefore, communication via resource-constrained networks is currently a fundamental bottleneck in FL, which is restricting the ML model complexity and user participation. One of the notable trends to reduce the communication cost of FL systems is gradient compression, in which techniques in the form of sparsification are utilized. However, these methods utilize a single compression rate for all users and do not consider communication heterogeneity in a real-world FL system. Therefore, these methods are bottlenecked by the worst communication capacity across users. 
Further, sparsification methods are non-adaptive and do not utilize the redundant, similar information across users' ML models for compression. In this paper, we introduce a novel Dynamic Sparsification for Federated Learning (DSFL) approach that enables users to compress their local models based on their communication capacity at each iteration by using two novel sparsification methods: layer-wise similarity sparsification (LSS) and extended top- $K$ sparsification. LSS enables DSFL to utilize the global redundant information in users' models by using the Centralized Kernel Alignment (CKA) similarity for sparsification. The extended top-$K$ model sparsification method empowers DSFL to accommodate the heterogeneous communication capacity of user devices by allowing different values of sparsification rate $K$ for each user at each iteration. Our extensive experimental results11All code and experiments are publicly available at: https://github.com/mahdibeit/DSFL. on three datasets show that DSFL has a faster convergence rate than fixed sparsification, and as the communication heterogeneity increases, this gap increases. Further, our thorough experimental investigations uncover the similarities of user models across the FL system.\",\"PeriodicalId\":106639,\"journal\":{\"name\":\"2022 5th International Conference on Communications, Signal Processing, and their Applications (ICCSPA)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 5th International Conference on Communications, Signal Processing, and their Applications (ICCSPA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCSPA55860.2022.10019204\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 5th International Conference on Communications, Signal Processing, and their Applications (ICCSPA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCSPA55860.2022.10019204","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Federated Learning (FL) is considered the key enabling approach for privacy-preserving, distributed machine learning (ML) systems. FL requires the periodic transmission of ML models from users to the server. Therefore, communication via resource-constrained networks is currently a fundamental bottleneck in FL, restricting ML model complexity and user participation. One of the notable trends to reduce the communication cost of FL systems is gradient compression, in which techniques in the form of sparsification are utilized. However, these methods use a single compression rate for all users and do not consider communication heterogeneity in a real-world FL system; they are therefore bottlenecked by the worst communication capacity across users. Further, these sparsification methods are non-adaptive and do not exploit the redundant, similar information across users' ML models for compression. In this paper, we introduce a novel Dynamic Sparsification for Federated Learning (DSFL) approach that enables users to compress their local models based on their communication capacity at each iteration, using two novel sparsification methods: layer-wise similarity sparsification (LSS) and extended top-$K$ sparsification. LSS enables DSFL to exploit the globally redundant information in users' models by using Centralized Kernel Alignment (CKA) similarity for sparsification. The extended top-$K$ model sparsification method empowers DSFL to accommodate the heterogeneous communication capacity of user devices by allowing a different sparsification rate $K$ for each user at each iteration. Our extensive experimental results on three datasets show that DSFL converges faster than fixed sparsification, and this gap widens as communication heterogeneity increases. Further, our thorough experimental investigations uncover the similarities of user models across the FL system. (All code and experiments are publicly available at: https://github.com/mahdibeit/DSFL.)
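
The abstract names two mechanisms. The first, layer-wise similarity sparsification (LSS), relies on a CKA similarity score between corresponding layers of different users' models. The sketch below shows a standard linear CKA computation only; it is not taken from the DSFL code, the probe activations are hypothetical, and how the score is mapped to a per-layer sparsification decision is left out here.

```python
# Minimal sketch of linear CKA as a layer-similarity measure. A score near 1
# means two layers encode nearly the same information, i.e. the kind of
# globally redundant information the abstract says LSS exploits.
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two representation matrices of shape (n, d1) and (n, d2)."""
    x = x - x.mean(dim=0, keepdim=True)   # center features
    y = y - y.mean(dim=0, keepdim=True)
    xty = (x.T @ y).norm() ** 2           # ||X^T Y||_F^2
    xtx = (x.T @ x).norm()                # ||X^T X||_F
    yty = (y.T @ y).norm()                # ||Y^T Y||_F
    return xty / (xtx * yty)

# Hypothetical usage: activations of the same layer in two users' local
# models on a common probe batch (batch of 64, 128 features).
acts_user_a = torch.randn(64, 128)
acts_user_b = torch.randn(64, 128)
similarity = linear_cka(acts_user_a, acts_user_b)
```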
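
The second mechanism, extended top-$K$ sparsification, keeps only the $K$ largest-magnitude entries of each user's update, with $K$ chosen per user and per iteration from that user's communication budget. The following is a minimal, generic top-$K$ sketch in PyTorch, not the authors' implementation; the capacity-to-$K$ mapping in the comment is hypothetical.

```python
# Minimal sketch of per-user top-K sparsification: each user transmits only
# the indices and values of the K largest-magnitude entries of its update.
import torch

def top_k_sparsify(update: torch.Tensor, k: int):
    """Keep the k largest-magnitude entries of an update tensor."""
    flat = update.flatten()
    k = max(1, min(k, flat.numel()))      # clamp k to a valid range
    _, idx = torch.topk(flat.abs(), k)    # indices of largest magnitudes
    return idx, flat[idx]

def densify(idx: torch.Tensor, vals: torch.Tensor, numel: int, shape):
    """Server-side reconstruction of the sparse update into a dense tensor."""
    dense = torch.zeros(numel)
    dense[idx] = vals
    return dense.view(shape)

# Hypothetical usage: user i derives K_i from its own uplink capacity,
# e.g. K_i = capacity_bytes_i // bytes_per_entry, so K differs across users
# and across iterations.
update = torch.randn(10, 10)
idx, vals = top_k_sparsify(update, k=20)
recovered = densify(idx, vals, update.numel(), update.shape)
```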