Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy
David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, T. Balch
Proceedings of the Third ACM International Conference on AI in Finance
Published: 2022-02-20 · DOI: 10.1145/3533271.3561754
Citations: 3
Abstract
Federated learning enables a population of distributed clients to jointly train a shared machine learning model with the assistance of a central server. The finance community has shown interest in its potential to allow inter-firm and cross-silo collaborative models for problems of common interest (e.g. fraud detection), even when customer data use is heavily regulated. Prior works on federated learning have employed cryptographic techniques to keep individual client model parameters private even when the central server is not trusted. However, there is an important gap in the literature: efficient protection against attacks in which other parties collude to expose an honest client’s model parameters, and therefore potentially protected customer data. We aim to close this collusion gap by presenting an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the “Sybil” attack in which a server generates or selects compromised client devices to gain additional information. We leverage this novel privacy mechanism to construct an improved secure federated learning protocol and prove the security of that protocol. We conclude with empirical analysis of the protocol’s execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.
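The abstract's central idea is that clients can each add only a *share* of the differential-privacy noise, so that the aggregate the untrusted server computes carries the full noise required for privacy. As a rough illustration of that noise-splitting principle only (not the paper's actual oblivious protocol, which additionally uses cryptographic protections against collusion), here is a minimal sketch; the function names, the Gaussian mechanism, and the parameters are assumptions for illustration:

```python
import numpy as np

def client_update(local_grad, sigma_total, num_clients, rng):
    # Illustrative sketch: each client adds an independent Gaussian noise
    # share with std sigma_total / sqrt(num_clients), so the SUM over all
    # clients carries noise with std sigma_total. No single client's
    # update is individually protected at that level; privacy holds for
    # the aggregate the server sees.
    sigma_share = sigma_total / np.sqrt(num_clients)
    return local_grad + rng.normal(0.0, sigma_share, size=local_grad.shape)

def server_aggregate(noisy_updates):
    # The server only ever sees noised updates; their mean is a
    # differentially private estimate of the average gradient.
    return np.mean(noisy_updates, axis=0)

rng = np.random.default_rng(0)
n = 5000  # matches the scale of the paper's 5,000-client simulation
true_grads = [np.ones(4) for _ in range(n)]  # toy identical gradients
updates = [client_update(g, sigma_total=1.0, num_clients=n, rng=rng)
           for g in true_grads]
avg = server_aggregate(updates)
```

With many clients, the per-client noise share is small, so utility degrades far less than if every client added the full noise locally; the paper's contribution is making this splitting robust even when the server and colluding (or Sybil) clients try to subtract out other parties' noise shares.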