FedMP: A multi-pronged defense algorithm against Byzantine poisoning attacks in federated learning
Kai Zhao, Lina Wang, Fangchao Yu, Bo Zeng, Zhi Pang

Computer Networks, Volume 257, Article 110990 (February 2025)
DOI: 10.1016/j.comnet.2024.110990
https://www.sciencedirect.com/science/article/pii/S1389128624008223
Citations: 0
Abstract
Federated learning (FL) is an increasingly popular privacy-preserving collaborative machine learning paradigm that enables clients to train a global model together without sharing their raw data. Despite its advantages, FL is vulnerable to untargeted Byzantine poisoning attacks, in which malicious clients send incorrect model updates during training to degrade the global model's performance or prevent it from converging. Existing defenses based on anomaly detection typically rely on additional auxiliary datasets and assume a known, fixed proportion of malicious clients. To overcome these shortcomings, we propose FedMP, a multi-pronged defense algorithm against untargeted Byzantine poisoning attacks. FedMP's primary idea is to detect anomalous variations in the magnitude and direction of model updates across communication rounds. In particular, FedMP first applies an adaptive scaling module to limit the impact of malicious updates with anomalous magnitudes. It then identifies and filters malicious model updates with abnormal directions through dynamic clustering and partial filtering. Finally, FedMP extracts the benign components of the filtered updates as reputation scores for model aggregation, further reducing the influence of malicious updates. Comprehensive evaluations on three publicly available datasets demonstrate that FedMP significantly outperforms existing Byzantine-robust defenses under scenarios with a high proportion of malicious clients (0.7 in our experiments) and a highly non-IID data distribution (degree 0.1 in our experiments).
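The three-pronged pipeline the abstract describes (magnitude scaling, direction-based filtering, reputation-weighted aggregation) can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the median-norm clipping rule, the mean-cosine-similarity filter (a stand-in for the paper's dynamic clustering), and the similarity-based reputation weights are all illustrative assumptions.

```python
# Hypothetical sketch of a FedMP-style robust aggregation round.
# The specific rules below (median-norm clipping, mean-cosine filtering,
# similarity-derived reputation weights) are illustrative assumptions,
# not the algorithm as published. Assumes at least two client updates.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    na, nb = norm(a), norm(b)
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0

def robust_aggregate(updates):
    # 1) Adaptive scaling: clip each update's norm to the median norm,
    #    limiting the influence of anomalously large updates.
    norms = sorted(norm(u) for u in updates)
    median = norms[len(norms) // 2]
    scaled = [[x * min(1.0, median / norm(u)) for x in u] if norm(u) else u
              for u in updates]
    # 2) Direction filtering: drop updates whose mean cosine similarity to
    #    the other updates is non-positive (stand-in for dynamic clustering).
    def mean_sim(i):
        others = [cosine(scaled[i], scaled[j])
                  for j in range(len(scaled)) if j != i]
        return sum(others) / len(others)
    sims = [mean_sim(i) for i in range(len(scaled))]
    kept = [i for i in range(len(scaled)) if sims[i] > 0.0]
    # 3) Reputation-weighted aggregation: weight surviving updates by
    #    their (clipped-at-zero) similarity scores.
    total = sum(max(sims[i], 0.0) for i in kept) or 1.0
    dim = len(updates[0])
    return [sum(max(sims[i], 0.0) * scaled[i][d] for i in kept) / total
            for d in range(dim)]
```

With three benign updates near the direction (1, 1) and one malicious update at (-10, -10), the clipping step first shrinks the malicious update to the median norm, the direction filter then removes it (its mean similarity to the benign group is negative), and the weighted average stays close to the benign consensus.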
About the journal
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.