Privacy-Preserving Coded Schemes for Multi-Server Federated Learning With Straggling Links

IEEE Transactions on Information Forensics and Security (IF 8.0, CAS Tier 1 Computer Science, JCR Q1 COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2024-12-30. DOI: 10.1109/TIFS.2024.3524160
Kai Liang;Songze Li;Ming Ding;Feng Tian;Youlong Wu
Volume 20, pp. 1222-1236. Full text: https://ieeexplore.ieee.org/document/10818498/
Citations: 0

Abstract

Federated Learning (FL) has emerged as a prominent machine learning paradigm in which multiple edge clients jointly train a global model without sharing their raw data. However, sharing local models or gradients still compromises clients' privacy and is susceptible to delivery failures over unreliable communication links. To address these issues, this paper considers a multi-server FL setting in which $E$ edge clients jointly train the global model with the help of $H$ servers, while guaranteeing data privacy and tolerating up to $s \leq H$ unreliable links per client. We first propose a hybrid coding scheme based on repetition coding and MDS coding, such that any $T_{s}$ colluding servers cannot deduce any client data beyond the aggregated model, and any $T_{e}$ colluding clients remain unaware of honest clients' data. We then propose a Lagrange coding with mask (LCM) scheme for more stringent privacy protection, which additionally requires that colluding servers learn nothing about either the local or global models. Furthermore, we establish lower bounds on both the uplink and downlink communication loads, and theoretically prove that the hybrid scheme and the LCM scheme achieve the optimal uplink communication loads under the first and second threat models, respectively. For the second threat model with no straggling links, the LCM scheme is optimal. These results demonstrate the communication efficiency, robustness, and privacy guarantees of our schemes.
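The paper's hybrid and LCM constructions are more intricate than can be shown here, but the core principle they build on, threshold secret sharing of model updates so that any $T$ colluding servers learn nothing beyond the aggregate, can be illustrated with a minimal Shamir-style sketch over a prime field. This is an assumption-laden toy example, not the paper's scheme; all names and parameters below are hypothetical.

```python
# Toy sketch of secret-shared secure aggregation (NOT the paper's hybrid/LCM
# scheme): each client splits its scalar update into n shares via a random
# degree-t polynomial; any t shares are uniformly random, while any t+1
# shares of the *summed* polynomial reveal only the aggregate update.
import random

P = 2**31 - 1  # Mersenne prime; all arithmetic is over GF(P)

def share(secret, n, t):
    """Split `secret` into n shares; t+1 reconstruct, t reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Two clients share their updates; servers add shares pointwise, so
# interpolating the summed shares yields only the aggregated update.
t, n = 2, 5
u1, u2 = 1234, 5678
s1, s2 = share(u1, n, t), share(u2, n, t)
summed = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(s1, s2)]
assert reconstruct(summed[:t + 1]) == (u1 + u2) % P  # aggregate only
```

Because reconstruction needs only $t+1$ of the $n$ shares, the same interpolation step also conveys the straggler-tolerance idea: up to $n-t-1$ server links may fail without affecting the recovered aggregate.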
Source journal: IEEE Transactions on Information Forensics and Security (Engineering and Technology: Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 7.40%
Annual articles: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles from this journal:
- RIRplay: Generation of a Replay Stereo Corpus for Voice Biometrics Anti-Spoofing
- Authentication With Passports for Deep RF Sensing Model Protection
- DiffMI: Breaking Face Recognition Privacy via Diffusion-Driven Training-Free Model Inversion
- Query-Efficient Hard-Label Attacks against Black-Box Image Forgery Localization Model via Reinforcement Learning
- Practical Private Set Operation via Secret Sharing for Lightweight Clients