FedCust: Offloading hyperparameter customization for federated learning
Syed Zawad, Xiaolong Ma, Jun Yi, Cheng Li, Minjia Zhang, Lei Yang, Feng Yan, Yuxiong He
Performance Evaluation, Volume 167, Article 102450. Published 2024-11-16. DOI: 10.1016/j.peva.2024.102450
Citations: 0
Abstract
Federated Learning (FL) is a new machine learning paradigm that enables training models collaboratively across clients without sharing private data. In FL, data is non-uniformly distributed among clients (i.e., data heterogeneity) and, due to privacy constraints, cannot be redistributed or monitored as in conventional machine learning. Such data heterogeneity and privacy requirements bring new challenges for hyperparameter optimization: the training dynamics change across clients even within the same training round, and they are difficult to measure due to privacy. The state of the art in hyperparameter customization can greatly improve FL model accuracy, but it also incurs significant computing overhead and power consumption on client devices and slows down the training process. To address this prohibitively expensive cost, we explore the possibility of offloading hyperparameter customization to servers. We propose FedCust, a framework that offloads the expensive hyperparameter customization cost from client devices to the central server without violating privacy constraints. Our key discovery is that it is not necessary to customize hyperparameters for every client: clients with similar data heterogeneity can use the same hyperparameters and still achieve good training performance. We propose heterogeneity measurement metrics for clustering clients into groups such that clients within the same group share hyperparameters. FedCust uses proxy data from the initial model design to emulate different heterogeneity groups and performs hyperparameter customization on the server side without accessing client data or information. To make hyperparameter customization scalable, FedCust further employs a Bayesian-strengthened tuner that significantly accelerates the hyperparameter customization. Extensive evaluation demonstrates that FedCust achieves up to 7%/2%/4%/4%/6% better accuracy than the widely adopted one-size-fits-all approach on the popular FL benchmarks FEMNIST, Shakespeare, Cifar100, Cifar10, and Fashion-MNIST, respectively, while being scalable and reducing computation, memory, and energy consumption on client devices, without compromising privacy constraints.
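To make the grouping-and-tuning idea concrete, below is a minimal Python sketch of the general approach the abstract describes: cluster clients by a heterogeneity summary, then tune one hyperparameter per group on server-side proxy data with a Bayesian optimizer. Everything here is illustrative rather than the paper's implementation: the label-distribution summary, the k-means grouping, the Gaussian-process tuner with a UCB acquisition, and the `eval_on_proxy` callback are all assumptions.

```python
# Illustrative sketch of server-side heterogeneity grouping + per-group tuning.
# Assumptions (not from the paper): heterogeneity is summarized by each client's
# label distribution; groups come from k-means; the "Bayesian-strengthened
# tuner" is approximated by a Gaussian-process optimizer with a UCB acquisition
# searching over the learning rate only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

def group_clients(label_dists, n_groups=3):
    """Cluster clients by their (privacy-permitting) heterogeneity summaries.

    label_dists: array of shape (n_clients, n_labels), one row per client.
    Returns a group id per client; clients in a group share hyperparameters.
    """
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(label_dists)
    return km.labels_

def tune_group_lr(eval_on_proxy, lr_bounds=(1e-4, 1e-1), n_iters=15):
    """Pick a learning rate for one heterogeneity group using proxy data.

    eval_on_proxy(lr) -> validation accuracy on server-side proxy data that
    emulates this group; it is a hypothetical stand-in for client evaluation.
    """
    rng = np.random.default_rng(0)
    lo, hi = np.log10(lr_bounds[0]), np.log10(lr_bounds[1])
    # Search in log space; seed the model with a few random evaluations.
    X = rng.uniform(lo, hi, size=(3, 1))
    y = np.array([eval_on_proxy(10 ** x[0]) for x in X])
    for _ in range(n_iters):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(lo, hi, size=(64, 1))
        mu, sigma = gp.predict(cand, return_std=True)
        nxt = cand[np.argmax(mu + 1.0 * sigma)]  # UCB: explore + exploit
        X = np.vstack([X, nxt])
        y = np.append(y, eval_on_proxy(10 ** nxt[0]))
    return 10 ** X[np.argmax(y), 0]
```

In this sketch, the server would run `tune_group_lr` once per group and broadcast each group's tuned hyperparameters to its member clients, so no client device ever runs its own search.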
Journal Introduction:
Performance Evaluation functions as a leading journal in the area of modeling, measurement, and evaluation of performance aspects of computing and communication systems. As such, it aims to present a balanced and complete view of the entire Performance Evaluation profession. Hence, the journal is interested in papers that focus on one or more of the following dimensions:
-Define new performance evaluation tools, including measurement and monitoring tools as well as modeling and analytic techniques
-Provide new insights into the performance of computing and communication systems
-Introduce new application areas where performance evaluation tools can play an important role, and create new uses for performance evaluation tools.
More specifically, common application areas of interest include the performance of:
-Resource allocation and control methods and algorithms (e.g. routing and flow control in networks, bandwidth allocation, processor scheduling, memory management)
-System architecture, design and implementation
-Cognitive radio
-VANETs
-Social networks and media
-Energy efficient ICT
-Energy harvesting
-Data centers
-Data centric networks
-System reliability
-System tuning and capacity planning
-Wireless and sensor networks
-Autonomic and self-organizing systems
-Embedded systems
-Network science