Parameterizing poisoning attacks in federated learning-based intrusion detection
Mohamed Amine Merzouk, F. Cuppens, Nora Boulahia-Cuppens, Reda Yaich
Proceedings of the 18th International Conference on Availability, Reliability and Security (ARES 2023), published 2023-08-29. DOI: 10.1145/3600160.3605090
Federated learning is a promising research direction in network intrusion detection. It enables collaborative training of machine learning models without revealing sensitive data. However, the lack of transparency in federated learning creates a security threat. Because the server cannot verify clients' reliability by inspecting their data, malicious clients can insert a backdoor into the model and activate it to evade detection. To maximize their chances of success, adversaries must fine-tune the attack parameters. Here we evaluate the impact of four attack parameters on the effectiveness, stealthiness, consistency, and timing of data poisoning attacks. Our results show that each parameter is decisive for the success of poisoning attacks, provided it is carefully adjusted to avoid damaging the model's accuracy or the data's consistency. Our findings serve as guidelines for the security evaluation of federated learning systems and as insights for defense strategies. Our experiments are carried out on the UNSW-NB15 dataset, and their implementation is available in a public code repository.
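To make the attack-parameterization idea concrete, the sketch below shows how a malicious client might poison its local training data before contributing to a federated round. It is a minimal illustration assuming a tabular feature matrix such as UNSW-NB15; the function and its parameter names (poison_rate, trigger, target_label) are hypothetical stand-ins for tunable attack parameters and are not taken from the paper's implementation.

```python
# Hypothetical sketch of a parameterized data-poisoning (backdoor) attack
# by a malicious federated-learning client. Parameter names are
# illustrative; the paper's four attack parameters may differ.
import numpy as np

def poison_dataset(X, y, poison_rate=0.1, trigger=None, target_label=0, rng=None):
    """Return a poisoned copy of the client's local data (X, y).

    poison_rate  -- fraction of samples to poison (attack strength)
    trigger      -- dict {feature_index: value} stamped onto poisoned
                    samples as a backdoor pattern
    target_label -- label assigned to poisoned samples (e.g., 'benign'
                    so that triggered attack traffic evades detection)
    """
    rng = rng or np.random.default_rng(0)
    X_p, y_p = X.copy(), y.copy()
    n_poison = int(poison_rate * len(X_p))
    idx = rng.choice(len(X_p), size=n_poison, replace=False)
    for feature, value in (trigger or {}).items():
        X_p[idx, feature] = value   # stamp the backdoor trigger
    y_p[idx] = target_label         # flip labels toward the target class
    return X_p, y_p

# Example: poison 10% of a client's local data, stamping feature 3 with
# an out-of-range value and relabeling those samples as benign (0).
X = np.random.rand(1000, 10).astype(np.float32)
y = np.random.randint(0, 2, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, poison_rate=0.10,
                                        trigger={3: 1.5}, target_label=0)
```

In a federated setting, such a client would train its local model on (X_poisoned, y_poisoned) and submit the resulting update to the server. Tuning a parameter like poison_rate trades off effectiveness against stealthiness and data consistency, which is the kind of parameter sensitivity the paper evaluates.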