Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning.

IF 10.2 | CAS Tier 1 (Computer Science) | Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-05 | DOI: 10.1109/TNNLS.2024.3486028
Chong Yu, Zhenyu Meng, Wenmiao Zhang, Lei Lei, Jianbing Ni, Kuan Zhang, Hai Zhao
{"title":"Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning.","authors":"Chong Yu, Zhenyu Meng, Wenmiao Zhang, Lei Lei, Jianbing Ni, Kuan Zhang, Hai Zhao","doi":"10.1109/TNNLS.2024.3486028","DOIUrl":null,"url":null,"abstract":"<p><p>In distributed systems, data may partially overlap in sample and feature spaces, that is, horizontal and vertical data partitioning. By combining horizontal and vertical federated learning (FL), hybrid FL emerges as a promising solution to simultaneously deal with data overlapping in both sample and feature spaces. Due to its decentralized nature, hybrid FL is vulnerable to model poisoning attacks, where malicious devices corrupt the global model by sending crafted model updates to the server. Existing work usually analyzes the statistical characteristics of all updates to resist model poisoning attacks. However, training local models in hybrid FL requires additional communication and computation steps, increasing the detection cost. In addition, due to data diversity in hybrid FL, solutions based on the assumption that malicious models are distinct from honest models may incorrectly classify honest ones as malicious, resulting in low accuracy. To this end, we propose a secure and efficient hybrid FL against model poisoning attacks. Specifically, we first identify two attacks to define how attackers manipulate local models in a harmful yet covert way. Then, we analyze the execution time and energy consumption in hybrid FL. Based on the analysis, we formulate an optimization problem to minimize training costs while guaranteeing accuracy considering the effect of attacks. To solve the formulated problem, we transform it into a Markov decision process and model it as a multiagent reinforcement learning (MARL) problem. Then, we propose a malicious device detection (MDD) method based on MARL to select honest devices to participate in training and improve efficiency. In addition, we propose an alternative poisoned model detection (PMD) method considering model change consistency. This method aims to prevent poisoned models from being used in the model aggregation. Experimental results validate that under the random local model poisoning attack, the proposed MDD method can save over 50% training costs while guaranteeing accuracy. When facing the advanced adaptive local model poisoning (ALMP) attack, utilizing both the proposed MDD and PMD methods achieves the desired accuracy while reducing execution time and energy consumption.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2024.3486028","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In distributed systems, data may partially overlap in sample and feature spaces, that is, horizontal and vertical data partitioning. By combining horizontal and vertical federated learning (FL), hybrid FL emerges as a promising solution to simultaneously deal with data overlap in both sample and feature spaces. Due to its decentralized nature, hybrid FL is vulnerable to model poisoning attacks, where malicious devices corrupt the global model by sending crafted model updates to the server. Existing work usually analyzes the statistical characteristics of all updates to resist model poisoning attacks. However, training local models in hybrid FL requires additional communication and computation steps, increasing the detection cost. In addition, due to data diversity in hybrid FL, solutions based on the assumption that malicious models are distinct from honest models may incorrectly classify honest ones as malicious, resulting in low accuracy. To this end, we propose a secure and efficient hybrid FL framework against model poisoning attacks. Specifically, we first identify two attacks to define how attackers manipulate local models in a harmful yet covert way. Then, we analyze the execution time and energy consumption in hybrid FL. Based on the analysis, we formulate an optimization problem to minimize training costs while guaranteeing accuracy, considering the effect of attacks. To solve the formulated problem, we transform it into a Markov decision process and model it as a multiagent reinforcement learning (MARL) problem. Then, we propose a malicious device detection (MDD) method based on MARL to select honest devices to participate in training and improve efficiency. In addition, we propose an alternative poisoned model detection (PMD) method considering model change consistency. This method aims to prevent poisoned models from being used in model aggregation. Experimental results validate that under the random local model poisoning attack, the proposed MDD method can save over 50% of training costs while guaranteeing accuracy. When facing the advanced adaptive local model poisoning (ALMP) attack, utilizing both the proposed MDD and PMD methods achieves the desired accuracy while reducing execution time and energy consumption.
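The abstract describes detecting poisoned updates by checking model change consistency. Below is a minimal, illustrative sketch of that general idea, not the paper's MDD/PMD implementation: it assumes a simple federated-averaging round in which one device submits a random poisoned update, and the server flags any device whose current update is directionally inconsistent with its own previous update (cosine similarity below a chosen threshold). All function names, the threshold of 0.5, and the simulated device behavior are assumptions made for this example.

```python
# Illustrative sketch only (assumed setup, not the paper's method):
# one round of federated averaging with a random local model poisoning attack,
# and a naive "model change consistency" check that flags devices whose update
# direction diverges from their own history.
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 5, 10
global_model = np.zeros(dim)

def honest_update(model):
    # Small step toward an assumed common optimum (the all-ones vector), plus noise.
    return 0.1 * (np.ones(dim) - model) + 0.01 * rng.normal(size=dim)

def poisoned_update():
    # Random local model poisoning: an arbitrary, large perturbation.
    return rng.normal(scale=5.0, size=dim)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Each device's update from the previous round (initialized honestly here).
prev_updates = {i: honest_update(global_model) for i in range(num_devices)}

for rnd in range(3):
    # Device 0 is assumed malicious in this toy example.
    updates = {i: (poisoned_update() if i == 0 else honest_update(global_model))
               for i in range(num_devices)}

    # Flag devices whose update direction is inconsistent with their own history.
    flagged = {i for i, u in updates.items() if cosine(u, prev_updates[i]) < 0.5}
    accepted = [u for i, u in updates.items() if i not in flagged]

    global_model = global_model + np.mean(accepted, axis=0)
    prev_updates = updates
    print(f"round {rnd}: flagged devices {sorted(flagged)}")
```

In the paper's setting, MDD additionally selects participating devices with MARL to trade off accuracy against execution time and energy, and PMD guards aggregation against adaptive attacks; the sketch only captures the consistency-check intuition.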

Source journal: IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Articles published: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.