{"title":"A Collaboration Federated Learning Framework with a Grouping Scheme against Poisoning Attacks","authors":"Chuan-Kang Liu, Chi-Hui Chiang","doi":"10.1109/IS3C57901.2023.00092","DOIUrl":null,"url":null,"abstract":"Federated learning has been regarded as emerging machine learning framework due to its privacy protection. In the IoT trend, federated learning enables edge clients to predict or classify local detected data with a global model that is computed by a FL server through the aggregation of all local models trained by a base FL algorithm. However, meanwhile, its distributed nature also brings several security challenges. Poisoning attacks are the main security risks that can easily and efficiently affect the accuracy of the global learning model. Previous work proposed a voting strategy which can predict the label of the input robustly no matter the attacks the malicious users use. However, its accuracy also easily falls down as the number of malicious user increases while the number of groups is fixed. This paper proposes a new attack defense algorithm against poisoning attacks in federated learning. This paper uses ID-distribution features to group all clients, including normal and malicious ones. The main idea of this proposed scheme is to put those potential malicious clients in specified groups. Hence, the resulting vote output can accurately classify the dataset inputs, regardless of the number of the groups the learning framework has. Our analytical results also show that our scheme exactly perform better compared to original voting scheme.","PeriodicalId":142483,"journal":{"name":"2023 Sixth International Symposium on Computer, Consumer and Control (IS3C)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Sixth International Symposium on Computer, Consumer and Control (IS3C)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IS3C57901.2023.00092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Federated learning has been regarded as an emerging machine learning framework because of its privacy protection. In the IoT trend, federated learning enables edge clients to predict or classify locally detected data with a global model that an FL server computes by aggregating all the local models trained with a base FL algorithm. However, its distributed nature also brings several security challenges. Poisoning attacks are the main security risk, since they can easily and efficiently degrade the accuracy of the global learning model. Previous work proposed a voting strategy that can predict the label of an input robustly regardless of the attack a malicious user mounts. However, its accuracy also degrades quickly as the number of malicious users increases while the number of groups stays fixed. This paper proposes a new defense algorithm against poisoning attacks in federated learning. It uses ID-distribution features to group all clients, normal and malicious alike. The main idea of the proposed scheme is to concentrate potentially malicious clients in designated groups, so that the resulting vote can classify the dataset inputs accurately regardless of how many groups the learning framework uses. Our analytical results also show that our scheme performs better than the original voting scheme.
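As a rough illustration of the grouped-voting idea the abstract describes, the sketch below partitions clients into groups, aggregates one model per group (FedAvg-style averaging of client parameters), and classifies an input by majority vote across the group models. This is a minimal sketch under stated assumptions: the abstract does not specify the paper's actual ID-distribution grouping or base FL algorithm, so the group assignment, the linear toy models, and every identifier here (`aggregate_group`, `vote_predict`, and so on) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of vote-based aggregation over client groups.
# The paper groups clients by "ID-distribution features"; here we
# simply assume a precomputed group assignment for each client.

def aggregate_group(client_updates):
    """FedAvg-style mean of the client model parameters in one group."""
    return np.mean(client_updates, axis=0)

def group_models(client_updates, group_ids, num_groups):
    """Build one aggregated model per group of clients."""
    return [
        aggregate_group([u for u, g in zip(client_updates, group_ids) if g == k])
        for k in range(num_groups)
    ]

def vote_predict(models, x, predict_fn):
    """Majority vote over the labels predicted by the per-group models."""
    votes = [predict_fn(m, x) for m in models]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage with linear models: each "model" is a weight vector, and
# predict_fn thresholds the score into a binary label.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(9)]  # 9 clients' parameters
groups = [0, 0, 0, 1, 1, 1, 2, 2, 2]              # 3 groups of 3 clients
models = group_models(updates, groups, num_groups=3)
x = rng.normal(size=4)
label = vote_predict(models, x, lambda w, v: int(w @ v > 0))
print("voted label:", label)
```

If the grouping succeeds in concentrating malicious clients into a few designated groups, their poisoned models control only those groups' votes, so the majority vote stays correct however many groups the framework uses.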