Contract-based hierarchical security aggregation scheme for enhancing privacy in federated learning

IF 3.8 | Tier 2 (Computer Science) | Q2 (Computer Science, Information Systems) | Journal of Information Security and Applications | Pub Date: 2024-08-13 | DOI: 10.1016/j.jisa.2024.103857
Qianjin Wei , Gang Rao , Xuanjing Wu
{"title":"基于合约的分层安全聚合方案,用于增强联合学习中的隐私保护","authors":"Qianjin Wei ,&nbsp;Gang Rao ,&nbsp;Xuanjing Wu","doi":"10.1016/j.jisa.2024.103857","DOIUrl":null,"url":null,"abstract":"<div><p>Federated learning ensures the privacy of participant data by uploading gradients rather than private data. However, it has yet to address the issue of untrusted aggregators using gradient inference attacks to obtain user privacy data. Current research introduces encryption, blockchain, or secure multi-party computation to address these issues, but these solutions suffer from significant computational and communication overhead, often requiring a trusted third party. To address these challenges, this paper proposes a contract-based hierarchical secure aggregation scheme to enhance the privacy of federated learning. Firstly, the paper designs a general hierarchical federated learning model that distinguishes among training, aggregation, and consensus layers, replacing the need for a trusted third party with smart contracts. Secondly, to prevent untrusted aggregators from inferring the privacy data of each participant, the paper proposes a novel aggregation scheme based on Paillier and secret sharing. This scheme forces aggregators to aggregate participants’ model parameters, thereby preserving the privacy of gradients. Additionally, secret sharing ensures robustness for participants dynamically joining or exiting. Furthermore, at the consensus layer, the paper proposes an accuracy-based update algorithm to mitigate the impact of Byzantine attacks and allows for the introduction of other consensus methods to ensure scalability. Experimental results demonstrate that our scheme enhances privacy protection, maintains model accuracy without loss, and exhibits robustness against Byzantine attacks. The proposed scheme effectively protects participant privacy in practical federated learning scenarios.</p></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"85 ","pages":"Article 103857"},"PeriodicalIF":3.8000,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Contract-based hierarchical security aggregation scheme for enhancing privacy in federated learning\",\"authors\":\"Qianjin Wei ,&nbsp;Gang Rao ,&nbsp;Xuanjing Wu\",\"doi\":\"10.1016/j.jisa.2024.103857\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Federated learning ensures the privacy of participant data by uploading gradients rather than private data. However, it has yet to address the issue of untrusted aggregators using gradient inference attacks to obtain user privacy data. Current research introduces encryption, blockchain, or secure multi-party computation to address these issues, but these solutions suffer from significant computational and communication overhead, often requiring a trusted third party. To address these challenges, this paper proposes a contract-based hierarchical secure aggregation scheme to enhance the privacy of federated learning. Firstly, the paper designs a general hierarchical federated learning model that distinguishes among training, aggregation, and consensus layers, replacing the need for a trusted third party with smart contracts. Secondly, to prevent untrusted aggregators from inferring the privacy data of each participant, the paper proposes a novel aggregation scheme based on Paillier and secret sharing. 
This scheme forces aggregators to aggregate participants’ model parameters, thereby preserving the privacy of gradients. Additionally, secret sharing ensures robustness for participants dynamically joining or exiting. Furthermore, at the consensus layer, the paper proposes an accuracy-based update algorithm to mitigate the impact of Byzantine attacks and allows for the introduction of other consensus methods to ensure scalability. Experimental results demonstrate that our scheme enhances privacy protection, maintains model accuracy without loss, and exhibits robustness against Byzantine attacks. The proposed scheme effectively protects participant privacy in practical federated learning scenarios.</p></div>\",\"PeriodicalId\":48638,\"journal\":{\"name\":\"Journal of Information Security and Applications\",\"volume\":\"85 \",\"pages\":\"Article 103857\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2024-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Information Security and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214212624001595\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212624001595","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning ensures the privacy of participant data by uploading gradients rather than private data. However, it has yet to address the issue of untrusted aggregators using gradient inference attacks to obtain user privacy data. Current research introduces encryption, blockchain, or secure multi-party computation to address these issues, but these solutions suffer from significant computational and communication overhead, often requiring a trusted third party. To address these challenges, this paper proposes a contract-based hierarchical secure aggregation scheme to enhance the privacy of federated learning. Firstly, the paper designs a general hierarchical federated learning model that distinguishes among training, aggregation, and consensus layers, replacing the need for a trusted third party with smart contracts. Secondly, to prevent untrusted aggregators from inferring the privacy data of each participant, the paper proposes a novel aggregation scheme based on Paillier and secret sharing. This scheme forces aggregators to aggregate participants' model parameters, thereby preserving the privacy of gradients. Additionally, secret sharing ensures robustness for participants dynamically joining or exiting. Furthermore, at the consensus layer, the paper proposes an accuracy-based update algorithm to mitigate the impact of Byzantine attacks and allows for the introduction of other consensus methods to ensure scalability. Experimental results demonstrate that our scheme enhances privacy protection, maintains model accuracy without loss, and exhibits robustness against Byzantine attacks. The proposed scheme effectively protects participant privacy in practical federated learning scenarios.
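The core privacy mechanism the abstract describes is additive homomorphic aggregation: each participant encrypts its gradient, the aggregator can only combine ciphertexts, and only the aggregated result is ever decrypted. The sketch below is a minimal, illustrative toy based on Paillier encryption, not the authors' implementation: the primes, fixed-point scaling, helper functions, and the direct decryption at the end are simplifying assumptions, and the paper's secret sharing of key material, smart contracts, and consensus layer are not modeled here.

```python
# Illustrative sketch of Paillier-style secure aggregation (NOT secure:
# tiny hard-coded primes, no secret sharing of the private key).
import secrets
from math import gcd

# --- toy Paillier key generation -----------------------------------------
p, q = 2_147_483_647, 2_147_483_629            # small demo primes
n = p * q
n2 = n * n
g = n + 1                                       # standard choice g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                            # since L(g^lam mod n^2) = lam mod n

def encrypt(m: int) -> int:
    """Paillier encryption of an integer m in Z_n."""
    r = secrets.randbelow(n - 2) + 2
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Paillier decryption; L(x) = (x - 1) // n."""
    l_val = (pow(c, lam, n2) - 1) // n
    return (l_val * mu) % n

SCALE = 10**6  # fixed-point encoding so float gradients map to integers

def encode(x: float) -> int:
    return round(x * SCALE) % n

def decode(m: int) -> float:
    if m > n // 2:             # map back from Z_n to signed values
        m -= n
    return m / SCALE

# --- participants encrypt their local gradients ---------------------------
gradients = [
    [0.12, -0.30, 0.05],       # participant 1
    [0.10, -0.25, 0.07],       # participant 2
    [0.15, -0.28, 0.02],       # participant 3
]
encrypted = [[encrypt(encode(x)) for x in grad] for grad in gradients]

# --- untrusted aggregator: combines ciphertexts componentwise -------------
# Multiplying Paillier ciphertexts adds the underlying plaintexts, so the
# aggregator never sees any individual gradient.
agg = encrypted[0]
for enc_grad in encrypted[1:]:
    agg = [(a * b) % n2 for a, b in zip(agg, enc_grad)]

# --- decrypt only the aggregate (in the paper the key material would be
# --- protected, e.g. via secret sharing; here we decrypt directly) --------
avg = [decode(decrypt(c)) / len(gradients) for c in agg]
print("aggregated average gradient:", avg)
```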

Source Journal
Journal of Information Security and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Articles published: 206
Review time: 56 days
About the Journal
Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view on modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.
Latest Articles in This Journal
Multi-ciphertext equality test heterogeneous signcryption scheme based on location privacy
Towards an intelligent and automatic irrigation system based on internet of things with authentication feature in VANET
A novel blockchain-based anonymous roaming authentication scheme for VANET
Efficient quantum algorithms to break group ring cryptosystems
IDPriU: A two-party ID-private data union protocol for privacy-preserving machine learning