Adaptive Client Model Update with Reinforcement Learning in Synchronous Federated Learning
Authors: Zirou Pan, Huan Geng, Linna Wei, Wei Zhao
Published in: 2022 32nd International Telecommunication Networks and Applications Conference (ITNAC)
Publication date: 2022-11-30
DOI: 10.1109/ITNAC55475.2022.9998360
Citations: 1
Abstract
Federated learning is widely applied in green wireless communication, mobile technologies and daily life. It allows multiple parties to jointly train a model on their combined data without revealing any of their local data to a centralized server. However, in practical applications, federated learning requires frequent communication between clients and servers, which imposes a considerable burden. In this work, we propose a Federated Learning Deep Q-Learning (FL-DQL) method to reduce the communication frequency between clients and servers in federated learning. FL-DQL adaptively selects the number of local self-updates a client performs, finding the best trade-off between local updates and global parameter aggregation. The performance of FL-DQL is evaluated via extensive experiments with real datasets on a networked prototype system. Results show that FL-DQL effectively reduces the communication overhead among the nodes in our experiments, in line with the green initiative.
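The core idea, using reinforcement learning to pick how many local updates a client runs between global aggregations, can be sketched with a toy example. The snippet below uses a simple tabular Q-learning agent rather than the paper's deep Q-network, and the state space, action set, and reward shaping (accuracy gain minus communication cost) are illustrative assumptions, not the authors' exact formulation:

```python
import random

# Candidate local-update counts a client may run before syncing with the
# server. These values are illustrative, not taken from the paper.
ACTIONS = [1, 2, 4, 8]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def choose_action(q, state):
    """Epsilon-greedy selection over candidate local-update counts."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q[state][a])

def update(q, state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q[next_state])
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])

def simulate_round(local_steps):
    """Toy reward: more local steps amortize communication cost over more
    work, but too many steps let the client drift from the global model."""
    accuracy_gain = 1.0 - 0.05 * local_steps  # drift penalty (illustrative)
    comm_cost = 1.0 / local_steps             # one sync amortized over steps
    return accuracy_gain - comm_cost

random.seed(0)
n_states = 3  # e.g. coarse buckets of the current training-loss level
q = [[0.0] * len(ACTIONS) for _ in range(n_states)]
state = 0
for _ in range(2000):
    a = choose_action(q, state)
    reward = simulate_round(ACTIONS[a])
    next_state = (state + 1) % n_states
    update(q, state, a, reward, next_state)
    state = next_state

# Greedy policy after training: preferred local-update count per state.
learned = [ACTIONS[max(range(len(ACTIONS)), key=lambda a: q[s][a])]
           for s in range(n_states)]
print(learned)
```

Under this toy reward, the agent settles on an intermediate update count: running more local steps per round lowers the per-step communication cost, but past a point the drift penalty dominates, which is the local-update vs. global-aggregation trade-off the abstract describes.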