Fed-RAC: Resource-Aware Clustering for Tackling Heterogeneity of Participants in Federated Learning

IEEE Transactions on Parallel and Distributed Systems · Pub Date: 2024-03-20 · DOI: 10.1109/TPDS.2024.3379933 · IF 5.6 · JCR Q1 (Computer Science, Theory & Methods)
Rahul Mishra;Hari Prabhat Gupta;Garvit Banga;Sajal K. Das
Citation count: 0

Abstract

Federated Learning is a training framework that enables multiple participants to collaboratively train a shared model while preserving data privacy. The heterogeneity of the participants' devices and networking resources delays training and aggregation. This paper introduces a novel approach to federated learning that incorporates resource-aware clustering, addressing the challenges posed by the diverse devices and networking resources among participants. Unlike static clustering approaches, the paper proposes a dynamic method that determines the optimal number of clusters using the Dunn Index, adapting to the varying heterogeneity levels among participants and ensuring a responsive, customized clustering. Next, the paper goes beyond empirical observation by mathematically deriving the number of communication rounds required for convergence within each cluster. Further, a participant assignment mechanism ensures that devices and networking resources are allocated optimally. The approach then incorporates a leader-follower technique, realized through knowledge distillation, which improves the performance of lightweight models within clusters. Finally, experiments validate the approach and compare it with the state of the art. The results demonstrate an accuracy improvement of over 3% compared to the closest competitor and a reduction in communication rounds of around 10%.
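The Dunn Index the abstract refers to is a standard cluster-validity measure: the minimum inter-cluster separation divided by the maximum intra-cluster diameter, with higher values indicating better-separated, more compact clusters. A minimal sketch of selecting a cluster count by maximizing it follows; the function names and the use of Euclidean distance over participant-resource features are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
from itertools import combinations

def dunn_index(points, labels):
    """Dunn index: min inter-cluster distance / max intra-cluster diameter."""
    clusters = [points[labels == c] for c in np.unique(labels)]
    # Largest pairwise distance within any single cluster (its diameter).
    diam = max(
        max((np.linalg.norm(a - b) for a, b in combinations(c, 2)), default=0.0)
        for c in clusters
    )
    # Smallest distance between points belonging to different clusters.
    sep = min(
        np.linalg.norm(a - b)
        for ci, cj in combinations(clusters, 2)
        for a in ci for b in cj
    )
    return sep / diam if diam > 0 else float("inf")

def best_labeling(points, labelings):
    """Among candidate labelings (e.g., k-means runs for k = 2..K),
    pick the one with the highest Dunn index."""
    return max(labelings, key=lambda lab: dunn_index(points, lab))
```

In a resource-aware setting, `points` would be per-participant feature vectors (compute, bandwidth, etc.), and the chosen labeling fixes the number of clusters dynamically rather than hard-coding it.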
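The leader-follower technique via knowledge distillation typically means a stronger "leader" model supplies temperature-softened output distributions that a lightweight "follower" model is trained to match. A minimal NumPy sketch of the distillation objective is below; the function names and temperature value are illustrative, and the paper's exact loss may differ:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(follower_logits, leader_logits, T=2.0):
    """Mean KL divergence from the leader's softened distribution
    to the follower's, averaged over the batch."""
    p = softmax(leader_logits, T)    # leader (teacher) soft targets
    q = softmax(follower_logits, T)  # follower (student) predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) / len(p))
```

The loss is zero when the follower reproduces the leader's distribution exactly and grows as the two diverge, so minimizing it transfers the leader's "dark knowledge" to the lightweight model within a cluster.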
Source journal: IEEE Transactions on Parallel and Distributed Systems (Engineering - Electrical & Electronic)
CiteScore: 11.00
Self-citation rate: 9.40%
Articles per year: 281
Review time: 5.6 months
Journal description: IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers. Particular areas of interest include, but are not limited to:
a) Parallel and distributed algorithms, focusing on topics such as: models of computation; numerical, combinatorial, and data-intensive parallel algorithms; scalability of algorithms and data structures for parallel and distributed systems; communication and synchronization protocols; network algorithms; scheduling; and load balancing.
b) Applications of parallel and distributed computing, including computational and data-enabled science and engineering, big data applications, parallel crowd sourcing, large-scale social network analysis, management of big data, cloud and grid computing, scientific and biomedical applications, mobile computing, and cyber-physical systems.
c) Parallel and distributed architectures, including architectures for instruction-level and thread-level parallelism; design, analysis, implementation, fault resilience, and performance measurements of multiple-processor systems; multicore processors and heterogeneous many-core systems; petascale and exascale system designs; novel big data architectures; special-purpose architectures, including graphics processors, signal processors, network processors, media accelerators, and other special-purpose processors and accelerators; impact of technology on architecture; network and interconnect architectures; parallel I/O and storage systems; architecture of the memory hierarchy; power-efficient and green computing architectures; dependable architectures; and performance modeling and evaluation.
d) Parallel and distributed software, including parallel and multicore programming languages and compilers, runtime systems, operating systems, Internet computing and web services, resource management including green computing, middleware for grids, clouds, and data centers, libraries, performance modeling and evaluation, parallel programming paradigms, and programming environments and tools.
Latest articles from this journal:
- DyLaClass: Dynamic Labeling Based Classification for Optimal Sparse Matrix Format Selection in Accelerating SpMV
- Design and Performance Evaluation of Linearly Extensible Cube-Triangle Network for Multicore Systems
- PeakFS: An Ultra-High Performance Parallel File System via Computing-Network-Storage Co-Optimization for HPC Applications
- Mitosis: A Scalable Sharding System Featuring Multiple Dynamic Relay Chains
- Breaking the Memory Wall for Heterogeneous Federated Learning via Model Splitting