PriVeriFL: Privacy-Preserving and Aggregation-Verifiable Federated Learning

IEEE Transactions on Services Computing · IF 5.8 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Information Systems) · Pub Date: 2024-08-28 · DOI: 10.1109/TSC.2024.3451183
Lulu Wang;Mirko Polato;Alessandro Brighente;Mauro Conti;Lei Zhang;Lin Xu
Published in: IEEE Transactions on Services Computing, vol. 18, no. 2, pp. 998–1011.
Citations: 0

Abstract

Federated learning provides a collaborative way to build machine learning models without sharing private data. However, attackers might infer private information from model updates submitted by participants, and the aggregator might maliciously forge the final aggregation results. Federated learning thus still faces data privacy and aggregation integrity challenges. In this paper, we combine inference attacks and information theory to analyze the sensitivity of different bits of model parameters. We conclude that not all bits of model parameters will leak privacy. This realization inspires us to propose a novel low-expansion homomorphic aggregation scheme based on Paillier homomorphic encryption (PHE) for safeguarding participants’ data privacy. Building upon this, we develop PriVeriFL-A, a privacy-preserving and aggregation-verifiable federated learning scheme that combines a homomorphic hash function and signatures. To prevent collusion attacks between the aggregator and malicious participants, we further improve our PHE-based scheme into a threshold PHE-based one, named PriVeriFL-B. Compared with the privacy-preserving federated learning scheme based on classic PHE, PriVeriFL-A reduces the communication overhead to 1.65%, and the encryption/decryption computation overhead to 0.88%. Both PriVeriFL-A and PriVeriFL-B can effectively verify the integrity of the global model, while maintaining an almost negligible communication overhead for integrity verification and protecting the privacy of participants’ data.
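The two mechanisms the abstract names, summing encrypted model updates under Paillier's additive homomorphism and checking the announced aggregate with a homomorphic hash, can be sketched as follows. This is a minimal illustration with toy, insecure parameters, not the paper's low-expansion scheme; all names and constants are chosen for the example.

```python
import math
import random

# --- Toy Paillier cryptosystem (tiny, insecure parameters for illustration) ---
p, q = 293, 433                 # demo primes; real deployments use >=1024-bit primes
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # common simplification for the generator
lam = math.lcm(p - 1, q - 1)    # private key (Carmichael function of n)

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n, g) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt c with the private key lam."""
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

# Each participant encrypts a quantized model update; the aggregator only
# multiplies ciphertexts, which corresponds to summing the plaintexts.
updates = [12, 7, 30]
agg_cipher = 1
for u in updates:
    agg_cipher = agg_cipher * encrypt(u) % n2
assert decrypt(agg_cipher) == sum(updates)

# --- Homomorphic hash sketch for aggregation verification ---
# H(x) = h^x mod P satisfies H(a) * H(b) = H(a + b), so participants can
# check that an announced sum matches the product of the per-update hashes.
P, h = 1000003, 5               # small prime modulus and base, demo only
hash_product = 1
for u in updates:
    hash_product = hash_product * pow(h, u, P) % P
assert hash_product == pow(h, sum(updates), P)
```

In this sketch the aggregator never sees a plaintext update, and any participant holding the individual hashes can detect a forged aggregate without re-running the aggregation.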
Source Journal
IEEE Transactions on Services Computing (Computer Science, Information Systems; Computer Science, Software Engineering)
CiteScore: 11.50
Self-citation rate: 6.20%
Articles per year: 278
Review time: >12 weeks
Journal Description: IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It emphasizes algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services, as well as composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, and collaboration in the realm of Services Computing.
Latest Articles from This Journal
Two-Phase Account Group Migration Service with Dynamic Load Awareness for Optimizing Sharding Blockchain
Privacy-Protected Joint Service placement and Task Offloading for Knowledge-Defined Cloud-Edge Networking
QoS-Aware Deep Reinforcement Learning for Dynamic CPU Pinning of Co-located Cloud Workloads
Did I Vet You Before? Assessing the Chrome Web Store Vetting Process through Browser Extension Similarity
MARS: A Multi-Agent Collaborative Reasoning Framework for Service Recommendation