{"title":"多目标联合学习:平衡全局绩效与个体公平","authors":"","doi":"10.1016/j.future.2024.07.046","DOIUrl":null,"url":null,"abstract":"<div><p>In federated learning, non-iid data not only diminishes the performance of the global model but also gives rise to the fairness problem which manifests as an increase in the variance of the global model’s accuracy across clients. Fairness issues can result in the global model performing poorly or even failing on certain clients. Existing methods addressing the fairness problem in federated learning tend to neglect the comprehensive improvement of both the average performance and fairness of the global model. In addressing it, the multi-objective optimization method for fine-tuning global gradients, FedMC algorithm is introduced in this paper. The primary objective is the average loss function of all clients, and the sub-objective involves fine-tuning the global gradient by reducing the gradient conflict between the global gradient and the local gradients. Specifically, we refine the global gradient by incorporating a sub-optimization objective aimed at alleviating conflicts between the global gradient and the local gradient with the largest deviation, denoted as FedMC. FedMC can enhance the performance and convergence rate of clients with initially poor performance, albeit at the cost of the earlier convergence rate of clients with initially good performance. Nevertheless, it enables the latter to reach the accuracy level achieved before fine-tuning. In addition, we also propose FedMC+ algorithm, owning three additional optimization mechanisms built upon the FedMC optimization objective which includes the decay of hyperparameter, the sliding window mechanism, and data-balanced client selection. Besides, we present a theoretical analysis of the convergence rate of FedMC, demonstrating its convergence to a Pareto stationary solution. 
Our combined experimental results confirm that FedMC+ achieves an average 4.5% improvement in accuracy and a 22% reduction in the degree of dispersion compared to state-of-the-art federated learning (FL) methods.</p></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":null,"pages":null},"PeriodicalIF":6.2000,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-objective federated learning: Balancing global performance and individual fairness\",\"authors\":\"\",\"doi\":\"10.1016/j.future.2024.07.046\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In federated learning, non-iid data not only diminishes the performance of the global model but also gives rise to the fairness problem which manifests as an increase in the variance of the global model’s accuracy across clients. Fairness issues can result in the global model performing poorly or even failing on certain clients. Existing methods addressing the fairness problem in federated learning tend to neglect the comprehensive improvement of both the average performance and fairness of the global model. In addressing it, the multi-objective optimization method for fine-tuning global gradients, FedMC algorithm is introduced in this paper. The primary objective is the average loss function of all clients, and the sub-objective involves fine-tuning the global gradient by reducing the gradient conflict between the global gradient and the local gradients. Specifically, we refine the global gradient by incorporating a sub-optimization objective aimed at alleviating conflicts between the global gradient and the local gradient with the largest deviation, denoted as FedMC. 
FedMC can enhance the performance and convergence rate of clients with initially poor performance, albeit at the cost of the earlier convergence rate of clients with initially good performance. Nevertheless, it enables the latter to reach the accuracy level achieved before fine-tuning. In addition, we also propose FedMC+ algorithm, owning three additional optimization mechanisms built upon the FedMC optimization objective which includes the decay of hyperparameter, the sliding window mechanism, and data-balanced client selection. Besides, we present a theoretical analysis of the convergence rate of FedMC, demonstrating its convergence to a Pareto stationary solution. Our combined experimental results confirm that FedMC+ achieves an average 4.5% improvement in accuracy and a 22% reduction in the degree of dispersion compared to state-of-the-art federated learning (FL) methods.</p></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2024-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X24004199\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of 
Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X24004199","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Multi-objective federated learning: Balancing global performance and individual fairness
In federated learning, non-IID data not only diminishes the performance of the global model but also gives rise to a fairness problem, which manifests as increased variance in the global model's accuracy across clients. Fairness issues can cause the global model to perform poorly, or even fail, on certain clients. Existing methods for the fairness problem in federated learning tend to neglect improving the global model's average performance and its fairness jointly. To address this, this paper introduces FedMC, a multi-objective optimization method that fine-tunes the global gradient. The primary objective is the average loss over all clients; the sub-objective fine-tunes the global gradient by reducing the conflict between the global gradient and the local gradients. Specifically, FedMC refines the global gradient by incorporating a sub-optimization objective that alleviates the conflict between the global gradient and the local gradient deviating most from it. FedMC improves the performance and convergence rate of clients that initially perform poorly, at the cost of slowing the early convergence of clients that initially perform well; nevertheless, the latter still reach the accuracy they achieved before fine-tuning. We further propose FedMC+, which adds three optimization mechanisms on top of the FedMC objective: hyperparameter decay, a sliding-window mechanism, and data-balanced client selection. In addition, we present a theoretical analysis of FedMC's convergence rate, showing that it converges to a Pareto-stationary solution. Our experimental results confirm that FedMC+ achieves an average 4.5% improvement in accuracy and a 22% reduction in dispersion compared to state-of-the-art federated learning (FL) methods.
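The core mechanism the abstract describes — identifying the local gradient that deviates most from the global gradient and resolving their conflict — can be sketched as follows. This is a hypothetical illustration in the style of gradient-surgery methods, not the paper's exact FedMC sub-objective: `refine_global_gradient`, the cosine-similarity criterion, and the projection step are assumptions for illustration only.

```python
import numpy as np

def refine_global_gradient(global_grad, local_grads):
    """Illustrative sketch: mitigate conflict between the global gradient
    and the most-deviating local gradient (an assumption, not the paper's
    exact FedMC objective)."""
    def cos_sim(a, b):
        # Cosine similarity, guarded against zero-norm gradients.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # Local gradient with the largest angular deviation from the global one.
    worst = min(local_grads, key=lambda g: cos_sim(global_grad, g))

    if global_grad @ worst < 0:  # negative inner product => gradient conflict
        # Project out the component of the global gradient that opposes
        # the most-deviating local gradient.
        global_grad = global_grad - (global_grad @ worst) / (worst @ worst) * worst
    return global_grad
```

After this projection the refined global gradient no longer points against the most-deviating client's update direction, which is one plausible way to raise the accuracy of initially poor-performing clients at some cost to early convergence elsewhere, as the abstract describes.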
Journal introduction:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.