A robust federated learning algorithm for partially trusted environments
Yong Li, TongTong Liu, HaiChao Ling, Wei Du, XiangLin Ren
Computers & Security, Volume 148, Article 104161, published 2024-10-19
DOI: 10.1016/j.cose.2024.104161
https://www.sciencedirect.com/science/article/pii/S0167404824004668
Abstract
Due to its distributed nature, federated learning is vulnerable to poisoning attacks during training. A model's resistance to such attacks can be improved with robust aggregation algorithms. Current research on poisoning-resistant federated learning is mainly based on two settings: no trust, or Byzantine robustness. However, neither setting reflects many practical scenarios, in which some participants in federated learning are trustworthy: for example, participants who have trained the model before and performed well, or participants with strong compliance and credibility, such as governments and national agencies. In existing research, these trusted participants must still accept the judgment of the aggregation node, which generates unnecessary computation, increases overhead, and fails to exploit the trusted environment. Because trusted clients exhibit no attack behavior, their training results can be used to classify the trustworthiness of the remaining untrusted clients and to identify attack nodes with higher accuracy. This paper therefore proposes a robust federated learning algorithm for partially trusted environments. The proposed scheme uses the results of trusted clients to judge the behavior of untrusted clients via cosine similarity and the Local Outlier Factor, and thereby identifies and detects malicious clients. Experiments are performed on the MNIST and CIFAR datasets: the algorithm is compared with six other aggregation algorithms under a 30% attack scenario and with four other aggregation algorithms under a 70% attack scenario, and it is more accurate than almost all of them. This paper is the first to study robust federated learning in a partially trusted environment, and the proposed algorithm resists poisoning attacks more effectively.
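The abstract names the two detection signals (cosine similarity to trusted clients' results, and the Local Outlier Factor) but not the algorithm itself. The Python sketch below illustrates how those two signals could be combined to flag suspicious clients; the function name, thresholds, and combination rule are illustrative assumptions, not the paper's actual method.

```python
# A minimal sketch of the detection idea described in the abstract. All names
# and thresholds here are illustrative assumptions, not the paper's algorithm.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def flag_suspicious_clients(trusted_updates, untrusted_updates,
                            sim_threshold=0.0, n_neighbors=5):
    """Return indices of untrusted clients whose flattened model updates look
    malicious. Both arguments are 2-D arrays, one update vector per row."""
    # Aggregate the trusted clients' updates into a reference direction.
    reference = np.mean(trusted_updates, axis=0)

    # Cosine similarity of each untrusted update to the trusted reference.
    norms = np.linalg.norm(untrusted_updates, axis=1) * np.linalg.norm(reference)
    cos_sim = untrusted_updates @ reference / np.clip(norms, 1e-12, None)

    # Local Outlier Factor over all updates; fit_predict marks outliers as -1.
    all_updates = np.vstack([trusted_updates, untrusted_updates])
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    labels = lof.fit_predict(all_updates)[len(trusted_updates):]

    # Flag a client if its update points away from the trusted direction or is
    # a density outlier relative to the rest of the population.
    return np.where((cos_sim < sim_threshold) | (labels == -1))[0]
```

The key design point mirrored from the abstract is that trusted clients provide a clean reference, so untrusted clients are judged against it rather than against each other; how the paper actually weights or sequences the two signals is not specified in this abstract.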
Journal Introduction
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides you with a unique blend of leading edge research and sound practical management advice. It is aimed at the professional involved with computer security, audit, control and data integrity in all sectors - industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise it is your first step to fully secure systems.