FedUP: Bridging Fairness and Efficiency in Cross-Silo Federated Learning
Haibo Liu;Jianfeng Lu;Xiong Wang;Chen Wang;Riheng Jia;Minglu Li
IEEE Transactions on Services Computing, vol. 17, no. 6, pp. 3672-3684, November 2024
DOI: 10.1109/TSC.2024.3489437
https://ieeexplore.ieee.org/document/10740334/
Citations: 0
Abstract
Although federated learning (FL) enables collaborative training across multiple data silos in a privacy-protected manner, naively minimizing the aggregated loss to facilitate an efficient federation may compromise its fairness. Many efforts have been devoted to maintaining similar average accuracy across clients by reweighting the loss function, while clients’ potential contributions are largely ignored. This, however, is often detrimental, since treating all clients equally harms the interests of those clients with greater contributions. To tackle this issue, we introduce utopian fairness to expound the relationship between individual earning and collaborative productivity, and propose Federated-UtoPia (FedUP), a novel FL framework that balances efficient collaboration and fair aggregation. For the distributed collaboration, we model the training process among strategic clients as a supermodular game, which facilitates a rational incentive design through the optimal reward. As for the model aggregation, we design a weight attention mechanism to compute fair aggregation weights by minimizing the performance bias among heterogeneous clients. In particular, we utilize alternating optimization theory to bridge the gap between collaboration efficiency and utopian fairness, and theoretically prove that FedUP achieves fair model performance with fast-rate training convergence. Extensive experiments on both synthetic and real datasets demonstrate the superiority of FedUP.
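The abstract does not spell out the exact form of the weight attention mechanism, but the general idea of fairness-aware aggregation can be illustrated with a minimal sketch: assign larger aggregation weights to clients whose current validation loss is higher, so that the aggregated model narrows the performance gap across heterogeneous silos. The function names and the softmax-over-loss weighting below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fairness_aware_weights(client_losses, temperature=1.0):
    """Hypothetical sketch: give higher aggregation weight to clients with
    larger validation loss, so aggregation reduces the performance bias
    across heterogeneous clients (softmax attention over per-client losses)."""
    losses = np.asarray(client_losses, dtype=float)
    logits = losses / temperature
    logits -= logits.max()          # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def aggregate(client_models, weights):
    """Weighted average of client model parameters (each model is a list of
    per-layer ndarrays); this is the standard FedAvg-style combination step."""
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_models)
    ]

if __name__ == "__main__":
    losses = [0.9, 0.4, 0.6]                       # per-client validation losses
    w = fairness_aware_weights(losses)
    print("aggregation weights:", np.round(w, 3))  # worse-performing clients weigh more
```

In this toy setting the client with the highest loss receives the largest weight; the paper's actual mechanism additionally couples these weights with the incentive-driven collaboration stage via alternating optimization.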
Journal introduction:
IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It places emphasis on algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services. It also covers areas like composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, collaboration, and more in the realm of Services Computing.