Differentially-Private Federated Learning with Long-Term Budget Constraints Using Online Lagrangian Descent
O. Odeyomi, G. Záruba
2021 IEEE World AI IoT Congress (AIIoT), published 2021-05-10
DOI: 10.1109/AIIoT52608.2021.9454170
Citations: 3
Abstract
This paper addresses the problem of time-varying data distributions in a fully decentralized federated learning setting with budget constraints. Most existing work covers only fixed data distributions in the centralized setting, which does not apply when the data are time-varying, as in real-time traffic monitoring. Moreover, much of the existing work does not address the budget constraint problem common in practical federated learning settings. To address these problems, we propose an online Lagrangian descent algorithm. To protect the privacy of the clients' local model updates, local differential privacy is introduced. We show that our algorithm achieves the best regret bound compared to similar algorithms, while satisfying the budget constraints in the long term.
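The abstract combines three ingredients: online primal-dual (Lagrangian) descent, a long-term budget constraint enforced via a dual variable, and local differential privacy applied to the client's gradient. The sketch below illustrates that general recipe on a toy single-client problem; the loss, cost function, step sizes, and noise scale are all illustrative assumptions, not the paper's actual algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy online Lagrangian descent sketch (assumed setup, not the paper's):
# minimize time-varying losses f_t(x) = ||x - a_t||^2 subject to a
# long-term budget: average per-round cost g_t(x) = ||x||_1 <= budget.
d = 5                   # model dimension (illustrative)
T = 2000                # number of online rounds
eta, mu = 0.05, 0.05    # primal / dual step sizes (illustrative)
budget_per_round = 1.0  # long-term budget target per round
sigma = 0.1             # Gaussian noise scale for local DP (illustrative)
clip = 1.0              # gradient clipping bound (DP sensitivity control)

x = np.zeros(d)         # model iterate
lam = 0.0               # Lagrange multiplier for the budget constraint
total_cost = 0.0

for t in range(T):
    a = rng.normal(size=d)      # time-varying data for round t
    grad_f = 2.0 * (x - a)      # gradient of the round-t loss
    cost = np.sum(np.abs(x))    # per-round resource cost g_t(x)
    grad_g = np.sign(x)         # subgradient of the cost

    # Lagrangian gradient, clipped then privatized with Gaussian noise
    # (a local-DP-style perturbation of the client's update).
    g = grad_f + lam * grad_g
    g = g / max(1.0, np.linalg.norm(g) / clip)
    g = g + rng.normal(scale=sigma, size=d)

    x = x - eta * g                                        # primal descent
    lam = max(0.0, lam + mu * (cost - budget_per_round))   # dual ascent
    total_cost += cost

avg_cost = total_cost / T
```

The dual ascent step grows `lam` whenever the round's cost exceeds the budget, which in turn penalizes costly directions in the primal step; this is the standard mechanism by which long-term (rather than per-round) constraints are satisfied in online primal-dual methods.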