Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy

David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, T. Balch
{"title":"基于遗忘分布式差分隐私的抗合谋联邦学习","authors":"David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, T. Balch","doi":"10.1145/3533271.3561754","DOIUrl":null,"url":null,"abstract":"Federated learning enables a population of distributed clients to jointly train a shared machine learning model with the assistance of a central server. The finance community has shown interest in its potential to allow inter-firm and cross-silo collaborative models for problems of common interest (e.g. fraud detection), even when customer data use is heavily regulated. Prior works on federated learning have employed cryptographic techniques to keep individual client model parameters private even when the central server is not trusted. However, there is an important gap in the literature: efficient protection against attacks in which other parties collude to expose an honest client’s model parameters, and therefore potentially protected customer data. We aim to close this collusion gap by presenting an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the “Sybil” attack in which a server generates or selects compromised client devices to gain additional information. We leverage this novel privacy mechanism to construct an improved secure federated learning protocol and prove the security of that protocol. We conclude with empirical analysis of the protocol’s execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.","PeriodicalId":134888,"journal":{"name":"Proceedings of the Third ACM International Conference on AI in Finance","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy\",\"authors\":\"David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, T. Balch\",\"doi\":\"10.1145/3533271.3561754\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning enables a population of distributed clients to jointly train a shared machine learning model with the assistance of a central server. The finance community has shown interest in its potential to allow inter-firm and cross-silo collaborative models for problems of common interest (e.g. fraud detection), even when customer data use is heavily regulated. Prior works on federated learning have employed cryptographic techniques to keep individual client model parameters private even when the central server is not trusted. However, there is an important gap in the literature: efficient protection against attacks in which other parties collude to expose an honest client’s model parameters, and therefore potentially protected customer data. We aim to close this collusion gap by presenting an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the “Sybil” attack in which a server generates or selects compromised client devices to gain additional information. We leverage this novel privacy mechanism to construct an improved secure federated learning protocol and prove the security of that protocol. 
We conclude with empirical analysis of the protocol’s execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.\",\"PeriodicalId\":134888,\"journal\":{\"name\":\"Proceedings of the Third ACM International Conference on AI in Finance\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-02-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Third ACM International Conference on AI in Finance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3533271.3561754\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third ACM International Conference on AI in Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3533271.3561754","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Federated learning enables a population of distributed clients to jointly train a shared machine learning model with the assistance of a central server. The finance community has shown interest in its potential to allow inter-firm and cross-silo collaborative models for problems of common interest (e.g. fraud detection), even when customer data use is heavily regulated. Prior works on federated learning have employed cryptographic techniques to keep individual client model parameters private even when the central server is not trusted. However, there is an important gap in the literature: efficient protection against attacks in which other parties collude to expose an honest client’s model parameters, and therefore potentially protected customer data. We aim to close this collusion gap by presenting an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the “Sybil” attack in which a server generates or selects compromised client devices to gain additional information. We leverage this novel privacy mechanism to construct an improved secure federated learning protocol and prove the security of that protocol. We conclude with empirical analysis of the protocol’s execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.
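The paper's actual protocol is not reproduced here, but the core idea it builds on, distributed differential privacy in federated aggregation, can be shown with a minimal sketch. In this hedged illustration, each client clips its model update and adds only a share of the required Gaussian noise, sized so that the noise contributed by the honest clients alone still meets the target scale even if some clients collude. The function names, parameters, and the choice to tolerate a fixed number of colluders are hypothetical illustrations, and the secure-aggregation/encryption layer the authors rely on is omitted entirely.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to bound its L2 sensitivity."""
    norm = max(np.linalg.norm(update), 1e-12)
    return update * min(1.0, clip_norm / norm)

def add_distributed_noise(update, clip_norm, sigma, n_clients, n_colluders):
    """Add this client's share of Gaussian noise (illustrative only).

    The per-client standard deviation is chosen so that the summed noise
    from the honest clients (n_clients - n_colluders of them) alone
    reaches the target scale sigma * clip_norm, even if colluding
    clients reveal their own noise shares.
    """
    honest = max(n_clients - n_colluders, 1)
    per_client_std = sigma * clip_norm / np.sqrt(honest)
    return update + np.random.normal(0.0, per_client_std, size=update.shape)

def aggregate(noisy_updates):
    """Server-side averaging; in the real protocol the inputs would be
    masked or encrypted so the server never sees individual updates."""
    return np.mean(noisy_updates, axis=0)

# Toy run: 5 clients, tolerating up to 2 colluders.
updates = [np.random.randn(10) for _ in range(5)]
noisy = [add_distributed_noise(clip_update(u, 1.0), clip_norm=1.0, sigma=1.2,
                               n_clients=5, n_colluders=2) for u in updates]
global_step = aggregate(noisy)
```

Scaling each client's noise to the assumed number of honest participants is one standard way to keep a differential privacy guarantee under collusion; the paper's oblivious mechanism addresses the same threat without the server learning which clients contribute which noise.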