{"title":"以个性化和公平性加速具有通信效率的联合多任务学习","authors":"Renyou Xie;Chaojie Li;Xiaojun Zhou;Zhaoyang Dong","doi":"10.1109/TPDS.2024.3411815","DOIUrl":null,"url":null,"abstract":"Federated learning techniques provide a promising framework for collaboratively training a machine learning model without sharing users’ data, and delivering a security solution to guarantee privacy during the model training of IoT devices. Nonetheless, challenges posed by data heterogeneity and communication resource constraints make it difficult to develop an efficient federated learning algorithm in terms of the low order of convergence rate. It could significantly deteriorate the quality of service for critical machine learning tasks, e.g., facial recognition, which requires an edge-ready, low-power, low-latency training algorithm. To address these challenges, a communication-efficient federated learning approach is proposed in this paper where the momentum technique is leveraged to accelerate the convergence rate while largely reducing the communication requirements. First, a federated multi-task learning framework by which the learning tasks are reformulated by the multi-objective optimization problem is introduced to address the data heterogeneity. The multiple gradient descent algorithm is harnessed to find the common gradient descending direction for all participants so that the common features can be learned and no sacrifice on each clients’ performance. Second, to reduce communication costs, a local momentum technique with global information is developed to speed up the convergence rate, where the convergence analysis of the proposed method under non-convex case is studied. It is proved that the proposed local momentum can actually achieve the same acceleration as the global momentum, whereas it is more robust than algorithms that solely rely on the acceleration by the global momentum. Third, the generalization of the proposed acceleration approach is investigated which is demonstrated by the accelerated variation of FedAvg. Finally, the performance of the proposed method on the learning model accuracy, convergence rate, and robustness to data heterogeneity, is investigated by empirical experiments on four public datasets, while a real-world IoT platform is constructed to demonstrate the communication efficiency of the proposed method.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 11","pages":"2239-2253"},"PeriodicalIF":5.6000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Accelerating Communication-Efficient Federated Multi-Task Learning With Personalization and Fairness\",\"authors\":\"Renyou Xie;Chaojie Li;Xiaojun Zhou;Zhaoyang Dong\",\"doi\":\"10.1109/TPDS.2024.3411815\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning techniques provide a promising framework for collaboratively training a machine learning model without sharing users’ data, and delivering a security solution to guarantee privacy during the model training of IoT devices. Nonetheless, challenges posed by data heterogeneity and communication resource constraints make it difficult to develop an efficient federated learning algorithm in terms of the low order of convergence rate. 
It could significantly deteriorate the quality of service for critical machine learning tasks, e.g., facial recognition, which requires an edge-ready, low-power, low-latency training algorithm. To address these challenges, a communication-efficient federated learning approach is proposed in this paper where the momentum technique is leveraged to accelerate the convergence rate while largely reducing the communication requirements. First, a federated multi-task learning framework by which the learning tasks are reformulated by the multi-objective optimization problem is introduced to address the data heterogeneity. The multiple gradient descent algorithm is harnessed to find the common gradient descending direction for all participants so that the common features can be learned and no sacrifice on each clients’ performance. Second, to reduce communication costs, a local momentum technique with global information is developed to speed up the convergence rate, where the convergence analysis of the proposed method under non-convex case is studied. It is proved that the proposed local momentum can actually achieve the same acceleration as the global momentum, whereas it is more robust than algorithms that solely rely on the acceleration by the global momentum. Third, the generalization of the proposed acceleration approach is investigated which is demonstrated by the accelerated variation of FedAvg. Finally, the performance of the proposed method on the learning model accuracy, convergence rate, and robustness to data heterogeneity, is investigated by empirical experiments on four public datasets, while a real-world IoT platform is constructed to demonstrate the communication efficiency of the proposed method.\",\"PeriodicalId\":13257,\"journal\":{\"name\":\"IEEE Transactions on Parallel and Distributed Systems\",\"volume\":\"35 11\",\"pages\":\"2239-2253\"},\"PeriodicalIF\":5.6000,\"publicationDate\":\"2024-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Parallel and Distributed Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10552428/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Parallel and Distributed Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10552428/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Accelerating Communication-Efficient Federated Multi-Task Learning With Personalization and Fairness
Federated learning techniques provide a promising framework for collaboratively training a machine learning model without sharing users’ data, and deliver a security solution that preserves privacy during model training on IoT devices. Nonetheless, challenges posed by data heterogeneity and communication resource constraints make it difficult to develop a federated learning algorithm that is efficient in terms of convergence rate. This can significantly deteriorate the quality of service for critical machine learning tasks, e.g., facial recognition, which require edge-ready, low-power, low-latency training algorithms. To address these challenges, a communication-efficient federated learning approach is proposed in this paper, in which the momentum technique is leveraged to accelerate convergence while substantially reducing communication requirements. First, a federated multi-task learning framework, in which the learning tasks are reformulated as a multi-objective optimization problem, is introduced to address data heterogeneity. The multiple gradient descent algorithm is harnessed to find a common descent direction for all participants, so that common features can be learned without sacrificing any client’s performance. Second, to reduce communication costs, a local momentum technique with global information is developed to speed up convergence, and the convergence of the proposed method is analyzed in the non-convex case. It is proved that the proposed local momentum achieves the same acceleration as global momentum, while being more robust than algorithms that rely solely on global momentum for acceleration. Third, the generality of the proposed acceleration approach is investigated and demonstrated through an accelerated variant of FedAvg. Finally, the performance of the proposed method in terms of model accuracy, convergence rate, and robustness to data heterogeneity is investigated through empirical experiments on four public datasets, and a real-world IoT platform is constructed to demonstrate its communication efficiency.
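To make the two algorithmic ingredients named in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: it illustrates (1) MGDA-style min-norm weighting of client gradients to obtain a common descent direction, and (2) local momentum updates that start from a shared global momentum buffer and are averaged back at the server. All function and parameter names (mgda_weights, local_momentum_round, beta, local_steps) are hypothetical illustrations, and the aggregation rule shown is a plain average rather than the paper's exact scheme.

# Hedged sketch of the two ideas described in the abstract; names and
# aggregation details are assumptions, not the authors' reference code.
import numpy as np

def mgda_weights(grads, iters=200, lr=0.1):
    """Approximate the weights lambda >= 0, sum(lambda) = 1 that minimize
    || sum_k lambda_k g_k ||^2 via projected gradient descent. The minimizing
    combination serves as a common descent direction for all clients."""
    G = np.stack(grads)                 # (K, d): one gradient per client
    K = G.shape[0]
    lam = np.full(K, 1.0 / K)
    GG = G @ G.T                        # Gram matrix of client gradients
    for _ in range(iters):
        lam -= lr * (GG @ lam)          # gradient of 0.5 * lam^T GG lam
        lam = np.clip(lam, 0.0, None)   # approximate simplex projection:
        s = lam.sum()                   # clip to nonnegative, renormalize
        lam = lam / s if s > 0 else np.full(K, 1.0 / K)
    return lam

def common_direction(grads):
    """Weighted combination of client gradients using the MGDA-style weights."""
    lam = mgda_weights(grads)
    return sum(l * g for l, g in zip(lam, grads))

def local_momentum_round(w, client_grad_fns, global_m,
                         local_steps=5, lr=0.05, beta=0.9):
    """One communication round: each client runs momentum SGD locally,
    initializing its momentum buffer from the global one, and the server
    averages the returned weights and momentum buffers."""
    new_ws, new_ms = [], []
    for grad_fn in client_grad_fns:
        w_k, m_k = w.copy(), global_m.copy()   # start from global state
        for _ in range(local_steps):
            m_k = beta * m_k + grad_fn(w_k)    # heavy-ball style update
            w_k = w_k - lr * m_k
        new_ws.append(w_k)
        new_ms.append(m_k)
    return np.mean(new_ws, axis=0), np.mean(new_ms, axis=0)

In this sketch the clip-and-renormalize step only approximates the projection onto the probability simplex, and averaging the returned momentum buffers stands in for whatever synchronization rule the paper actually uses; it is meant only to show how a locally maintained momentum can track a global one between communication rounds.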
Journal Introduction:
IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers. Particular areas of interest include, but are not limited to:
a) Parallel and distributed algorithms, focusing on topics such as: models of computation; numerical, combinatorial, and data-intensive parallel algorithms, scalability of algorithms and data structures for parallel and distributed systems, communication and synchronization protocols, network algorithms, scheduling, and load balancing.
b) Applications of parallel and distributed computing, including computational and data-enabled science and engineering, big data applications, parallel crowd sourcing, large-scale social network analysis, management of big data, cloud and grid computing, scientific and biomedical applications, mobile computing, and cyber-physical systems.
c) Parallel and distributed architectures, including architectures for instruction-level and thread-level parallelism; design, analysis, implementation, fault resilience and performance measurements of multiple-processor systems; multicore processors, heterogeneous many-core systems; petascale and exascale systems designs; novel big data architectures; special purpose architectures, including graphics processors, signal processors, network processors, media accelerators, and other special purpose processors and accelerators; impact of technology on architecture; network and interconnect architectures; parallel I/O and storage systems; architecture of the memory hierarchy; power-efficient and green computing architectures; dependable architectures; and performance modeling and evaluation.
d) Parallel and distributed software, including parallel and multicore programming languages and compilers, runtime systems, operating systems, Internet computing and web services, resource management including green computing, middleware for grids, clouds, and data centers, libraries, performance modeling and evaluation, parallel programming paradigms, and programming environments and tools.