{"title":"FedQClip: Accelerating Federated Learning via Quantized Clipped SGD","authors":"Zhihao Qu;Ninghui Jia;Baoliu Ye;Shihong Hu;Song Guo","doi":"10.1109/TC.2024.3477972","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) has emerged as a promising technique for collaboratively training machine learning models among multiple participants while preserving privacy-sensitive data. However, the conventional parameter server architecture presents challenges in terms of communication overhead when employing iterative optimization methods such as Stochastic Gradient Descent (SGD). Although communication compression techniques can reduce the traffic cost of FL during each training round, they often lead to degraded convergence rates, mainly due to compression errors and data heterogeneity. To address these issues, this paper presents FedQClip, an innovative approach that combines quantization and Clipped SGD. FedQClip leverages an adaptive step size inversely proportional to the <inline-formula><tex-math>$\\ell_{2}$</tex-math></inline-formula> norm of the gradient, effectively mitigating the negative impacts of quantized errors. Additionally, clipped operations can be applied locally and globally to further expedite training. Theoretical analyses provide evidence that, even under the settings of Non-IID (non-independent and identically distributed) data, FedQClip achieves a convergence rate of <inline-formula><tex-math>$\\mathcal{O}(\\frac{1}{\\sqrt{T}})$</tex-math></inline-formula>, effectively addressing the convergence degradation caused by compression errors. Furthermore, our theoretical analysis highlights the importance of selecting an appropriate number of local updates to enhance the convergence of FL training. Through extensive experiments, we demonstrate that FedQClip outperforms state-of-the-art methods in terms of communication efficiency and convergence rate.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 2","pages":"717-730"},"PeriodicalIF":3.6000,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computers","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10713249/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Federated Learning (FL) has emerged as a promising technique for collaboratively training machine learning models among multiple participants while preserving privacy-sensitive data. However, the conventional parameter server architecture presents challenges in terms of communication overhead when employing iterative optimization methods such as Stochastic Gradient Descent (SGD). Although communication compression techniques can reduce the traffic cost of FL during each training round, they often lead to degraded convergence rates, mainly due to compression errors and data heterogeneity. To address these issues, this paper presents FedQClip, an innovative approach that combines quantization and Clipped SGD. FedQClip leverages an adaptive step size inversely proportional to the $\ell_{2}$ norm of the gradient, effectively mitigating the negative impact of quantization errors. Additionally, clipping operations can be applied both locally and globally to further expedite training. Theoretical analyses provide evidence that, even under the setting of Non-IID (non-independent and identically distributed) data, FedQClip achieves a convergence rate of $\mathcal{O}(\frac{1}{\sqrt{T}})$, effectively addressing the convergence degradation caused by compression errors. Furthermore, our theoretical analysis highlights the importance of selecting an appropriate number of local updates to enhance the convergence of FL training. Through extensive experiments, we demonstrate that FedQClip outperforms state-of-the-art methods in terms of communication efficiency and convergence rate.
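To make the abstract's ingredients concrete, the following Python sketch illustrates how a QSGD-style stochastic quantizer can be combined with clipped SGD steps applied both locally on each client and globally at the server, with the effective step size capped in inverse proportion to the gradient's $\ell_{2}$ norm. This is a minimal illustration under assumed choices (the function names quantize, clipped_step, and fedqclip_round, the constants eta, gamma, num_levels, and the specific uniform quantizer are all introduced here for exposition); it is not the authors' implementation of FedQClip.

```python
# Illustrative sketch only: clipped local SGD + quantized uplink + clipped
# global aggregation, loosely following the ideas described in the abstract.
# All names and constants below are assumptions made for this example.
import numpy as np

def quantize(v, num_levels=16):
    """Stochastic uniform quantization of a vector onto num_levels levels
    scaled by its l2 norm (a standard QSGD-style compressor)."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * num_levels
    lower = np.floor(scaled)
    prob = scaled - lower                      # probability of rounding up
    levels = lower + (np.random.rand(*v.shape) < prob)
    return np.sign(v) * norm * levels / num_levels

def clipped_step(x, grad, eta=0.1, gamma=1.0):
    """One clipped SGD step: the step size is capped so that it becomes
    inversely proportional to the gradient norm once that norm exceeds gamma/eta."""
    step = min(eta, gamma / (np.linalg.norm(grad) + 1e-12))
    return x - step * grad

def fedqclip_round(x_global, client_grad_fns, local_steps=5):
    """One communication round: each client runs clipped local SGD, sends a
    quantized model delta, and the server applies a clipped global update."""
    deltas = []
    for grad_fn in client_grad_fns:
        x = x_global.copy()
        for _ in range(local_steps):
            x = clipped_step(x, grad_fn(x))    # local clipping
        deltas.append(quantize(x - x_global))  # compressed uplink
    avg_delta = np.mean(deltas, axis=0)
    # Treat the negative averaged delta as a pseudo-gradient and clip again.
    return clipped_step(x_global, -avg_delta, eta=1.0, gamma=1.0)
```

In this sketch, the cap gamma/||g|| on the effective step size is what limits how far a single noisy, quantized update can move the model, which is the intuition the abstract gives for why clipping mitigates compression error.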
Journal introduction:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.