{"title":"Accelerating Fair Federated Learning: Adaptive Federated Adam","authors":"Li Ju;Tianru Zhang;Salman Toor;Andreas Hellander","doi":"10.1109/TMLCN.2024.3423648","DOIUrl":null,"url":null,"abstract":"Federated learning is a distributed and privacy-preserving approach to train a statistical model collaboratively from decentralized data held by different parties. However, when the datasets are not independent and identically distributed, models trained by naive federated algorithms may be biased towards certain participants, and model performance across participants is non-uniform. This is known as the fairness problem in federated learning. In this paper, we formulate fairness-controlled federated learning as a dynamical multi-objective optimization problem to ensure the fairness and convergence with theoretical guarantee. To solve the problem efficiently, we study the convergence and bias of \n<monospace>Adam</monospace>\n as the server optimizer in federated learning, and propose Adaptive Federated Adam (\n<monospace>AdaFedAdam</monospace>\n) to accelerate fair federated learning with alleviated bias. We validated the effectiveness, Pareto optimality and robustness of \n<monospace>AdaFedAdam</monospace>\n with numerical experiments and show that \n<monospace>AdaFedAdam</monospace>\n outperforms existing algorithms, providing better convergence and fairness properties of the federated scheme.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1017-1032"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10584508","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Machine Learning in Communications and Networking","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10584508/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Federated learning is a distributed and privacy-preserving approach to training a statistical model collaboratively from decentralized data held by different parties. However, when the datasets are not independent and identically distributed, models trained by naive federated algorithms may be biased towards certain participants, and model performance across participants becomes non-uniform. This is known as the fairness problem in federated learning. In this paper, we formulate fairness-controlled federated learning as a dynamic multi-objective optimization problem to ensure fairness and convergence with theoretical guarantees. To solve the problem efficiently, we study the convergence and bias of Adam as the server optimizer in federated learning, and propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated learning with alleviated bias. We validate the effectiveness, Pareto optimality, and robustness of AdaFedAdam with numerical experiments and show that AdaFedAdam outperforms existing algorithms, providing better convergence and fairness properties for the federated scheme.
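For context, the fairness-controlled problem described in the abstract is commonly posed as a multi-objective program over the participants' local objectives. The following is a generic form for illustration only, not necessarily the paper's exact dynamic formulation:

\min_{w} \; \big( F_1(w), \dots, F_N(w) \big), \qquad F_k(w) = \mathbb{E}_{\xi \sim \mathcal{D}_k} \left[ \ell(w; \xi) \right],

where \mathcal{D}_k is client k's local data distribution and N is the number of participants.

The abstract also builds on Adam as the server optimizer. Below is a minimal NumPy sketch of a generic FedAdam-style server step, in which the averaged client update is treated as a pseudo-gradient for a standard Adam update. The function names and hyperparameter defaults are illustrative assumptions, and AdaFedAdam's adaptive fairness control on top of this step is not shown here.

```python
import numpy as np

def init_adam_state(dim):
    """Fresh Adam state for a flat parameter vector of length dim."""
    return {"m": np.zeros(dim), "v": np.zeros(dim), "t": 0}

def server_adam_step(global_weights, client_deltas, state,
                     lr=1e-2, beta1=0.9, beta2=0.99, eps=1e-3):
    """One FedAdam-style server round (illustrative, not AdaFedAdam):
    treat the averaged client update as a pseudo-gradient and apply
    a standard Adam step on the server."""
    # Pseudo-gradient: average of (local_weights - global_weights)
    # over the participating clients.
    delta = np.mean(client_deltas, axis=0)

    t = state["t"] + 1
    m = beta1 * state["m"] + (1 - beta1) * delta
    v = beta2 * state["v"] + (1 - beta2) * delta ** 2
    # Bias-corrected moment estimates, as in standard Adam.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Client deltas point towards lower local loss, so the server
    # *adds* the Adam step rather than subtracting it.
    new_weights = global_weights + lr * m_hat / (np.sqrt(v_hat) + eps)
    return new_weights, {"m": m, "v": v, "t": t}

# Toy usage: 4 clients, a 3-parameter model.
w = np.zeros(3)
state = init_adam_state(3)
deltas = [np.array([0.1, -0.2, 0.05]) for _ in range(4)]
w, state = server_adam_step(w, deltas, state)
```

Treating the averaged delta as a pseudo-gradient lets the server reuse any first-order optimizer; per the abstract, AdaFedAdam additionally alleviates the bias such aggregation introduces under non-IID data.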