Secure Federated Training: Detecting Compromised Nodes and Identifying the Type of Attacks

Pretom Roy Ovi, A. Gangopadhyay, R. Erbacher, Carl E. Busart
{"title":"Secure Federated Training: Detecting Compromised Nodes and Identifying the Type of Attacks","authors":"Pretom Roy Ovi, A. Gangopadhyay, R. Erbacher, Carl E. Busart","doi":"10.1109/ICMLA55696.2022.00183","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) allows a set of clients to collaboratively train a model without sharing private data. As a result, FL has limited control over the local data and corresponding training process. Therefore, it is susceptible to poisoning attacks in which malicious clients use malicious training data or local updates to poison the global model. In this work, we first studied the data level and model level poisoning attacks. We simulated model poisoning attacks by tampering the local model updates during each round of communication and data poisoning attacks by training a few clients on malicious data. And clients under such attacks carry faulty information to the server, poison the global model, and restrict it from convergence. Therefore, detecting clients under attacks as well as identifying the type of attacks are required to recover the clients from their malicious status. To address these issues, we proposed a way under federated framework that enables the detection of malicious clients and attack types while ensuring data privacy. Our clustering-based approach utilizes the neuron’s activations from the local models to identify the type of poisoning attacks. We also proposed to check the weight distribution of local model updates among the participating clients to detect malicious clients. Our experimental results validated the robustness of the proposed framework against the attacks mentioned above by successfully detecting the compromised clients and the attack types. Moreover, the global model trained on MNIST data couldn’t reach the optimal point even after 75 rounds because of malicious clients, whereas the proposed approach by detecting the malicious clients ensured convergence within only 30 rounds and 40 rounds in independent and identically distributed (IID) and non- independent and identically distributed (non-IID) setup respectively.","PeriodicalId":128160,"journal":{"name":"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA55696.2022.00183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Federated learning (FL) allows a set of clients to collaboratively train a model without sharing private data. As a result, the server has limited control over the local data and the corresponding training process, which makes FL susceptible to poisoning attacks in which malicious clients use malicious training data or manipulated local updates to poison the global model. In this work, we first studied data-level and model-level poisoning attacks. We simulated model poisoning attacks by tampering with the local model updates during each round of communication, and data poisoning attacks by training a few clients on malicious data. Clients under such attacks carry faulty information to the server, poison the global model, and prevent it from converging. Detecting the clients under attack and identifying the type of attack are therefore required to recover those clients from their malicious status. To address these issues, we proposed an approach within the federated framework that enables the detection of malicious clients and attack types while preserving data privacy. Our clustering-based approach uses the neurons' activations from the local models to identify the type of poisoning attack, and we additionally examine the weight distribution of the local model updates among the participating clients to detect malicious clients. Our experimental results validated the robustness of the proposed framework against the attacks above by successfully detecting the compromised clients and the attack types. Moreover, because of malicious clients, the global model trained on MNIST data could not reach the optimal point even after 75 rounds, whereas the proposed approach, by detecting the malicious clients, ensured convergence within only 30 rounds in the independent and identically distributed (IID) setup and 40 rounds in the non-IID setup.
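The abstract states that malicious clients are detected by checking the weight distribution of the local model updates across participating clients, but it does not spell out the statistic used. The sketch below is a minimal illustration of that idea under an assumed rule: flatten each client's update, compare its norm against the round's median, and flag robust outliers. The function names, the MAD-based threshold, and the scaled-update attacker are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of screening local updates by their weight distribution.
# Assumed rule (not from the paper): flag clients whose update norm deviates
# from the per-round median by more than k median-absolute-deviations (MAD).
import numpy as np

def flatten_update(update):
    """Concatenate a client's per-layer update tensors into one vector."""
    return np.concatenate([np.asarray(w).ravel() for w in update])

def detect_suspicious_clients(client_updates, k=3.0):
    """client_updates: dict mapping client id -> list of per-layer arrays.
    Returns the set of client ids whose update norm is a robust outlier."""
    norms = {cid: np.linalg.norm(flatten_update(u)) for cid, u in client_updates.items()}
    values = np.array(list(norms.values()))
    median = np.median(values)
    mad = np.median(np.abs(values - median)) + 1e-12  # avoid division by zero
    return {cid for cid, n in norms.items() if np.abs(n - median) / mad > k}

# Example: eight honest clients plus one whose update was scaled 50x,
# a crude stand-in for a model poisoning attack.
rng = np.random.default_rng(0)
updates = {f"client_{i}": [rng.normal(0, 0.01, (10, 10))] for i in range(8)}
updates["client_bad"] = [50.0 * rng.normal(0, 0.01, (10, 10))]
# 'client_bad' is flagged by a wide margin; with so few honest clients an
# honest one may occasionally exceed the threshold as well.
print(detect_suspicious_clients(updates))
```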
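The attack-type identification is described only as clustering the neurons' activations from the local models. The sketch below assumes one plausible instantiation: each local model's mean hidden-layer activations on a small audit batch form a per-client signature, and k-means groups clients whose models behave alike, separating clean clients from data- and model-poisoned ones. The feature construction, the audit batch, and the choice of k-means are assumptions made for illustration, not the paper's published procedure.

```python
# Minimal sketch of activation clustering for attack-type identification.
# Assumed setup: mean hidden-layer activations on an audit batch serve as a
# per-client signature; k-means with k = number of expected behaviour groups.
import numpy as np
from sklearn.cluster import KMeans

def activation_signature(model_forward, audit_batch):
    """model_forward: callable returning hidden-layer activations for a batch.
    Returns the mean activation vector, one scalar per neuron."""
    activations = model_forward(audit_batch)      # shape: (batch, n_neurons)
    return np.asarray(activations).mean(axis=0)   # shape: (n_neurons,)

def cluster_clients(signatures, n_groups=3):
    """signatures: dict client id -> activation signature vector.
    Groups clients whose local models activate similarly; under the assumed
    setup one cluster holds clean clients and the others the data- and
    model-poisoned ones."""
    ids = list(signatures)
    X = np.stack([signatures[cid] for cid in ids])
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)
    groups = {}
    for cid, lab in zip(ids, labels):
        groups.setdefault(int(lab), []).append(cid)
    return groups

# Synthetic demo: 6 clean clients, 2 data-poisoned, 2 model-poisoned,
# each group drawn around a different activation profile.
rng = np.random.default_rng(1)
centers = {"clean": 0.2, "data_poison": 0.8, "model_poison": -0.5}
sigs = {}
for name, c in centers.items():
    for i in range(6 if name == "clean" else 2):
        sigs[f"{name}_{i}"] = c + 0.05 * rng.normal(size=32)
print(cluster_clients(sigs))
```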