Hao Jian Huang, Bekzod Iskandarov, Mizanur Rahman, Hakan T. Otal, M. Abdullah Canbaz
{"title":"对抗环境中的联合学习:网络安全中的试验台设计和抗中毒能力","authors":"Hao Jian Huang, Bekzod Iskandarov, Mizanur Rahman, Hakan T. Otal, M. Abdullah Canbaz","doi":"arxiv-2409.09794","DOIUrl":null,"url":null,"abstract":"This paper presents the design and implementation of a Federated Learning\n(FL) testbed, focusing on its application in cybersecurity and evaluating its\nresilience against poisoning attacks. Federated Learning allows multiple\nclients to collaboratively train a global model while keeping their data\ndecentralized, addressing critical needs for data privacy and security,\nparticularly in sensitive fields like cybersecurity. Our testbed, built using\nthe Flower framework, facilitates experimentation with various FL frameworks,\nassessing their performance, scalability, and ease of integration. Through a\ncase study on federated intrusion detection systems, we demonstrate the\ntestbed's capabilities in detecting anomalies and securing critical\ninfrastructure without exposing sensitive network data. Comprehensive poisoning\ntests, targeting both model and data integrity, evaluate the system's\nrobustness under adversarial conditions. Our results show that while federated\nlearning enhances data privacy and distributed learning, it remains vulnerable\nto poisoning attacks, which must be mitigated to ensure its reliability in\nreal-world applications.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity\",\"authors\":\"Hao Jian Huang, Bekzod Iskandarov, Mizanur Rahman, Hakan T. Otal, M. Abdullah Canbaz\",\"doi\":\"arxiv-2409.09794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents the design and implementation of a Federated Learning\\n(FL) testbed, focusing on its application in cybersecurity and evaluating its\\nresilience against poisoning attacks. Federated Learning allows multiple\\nclients to collaboratively train a global model while keeping their data\\ndecentralized, addressing critical needs for data privacy and security,\\nparticularly in sensitive fields like cybersecurity. Our testbed, built using\\nthe Flower framework, facilitates experimentation with various FL frameworks,\\nassessing their performance, scalability, and ease of integration. Through a\\ncase study on federated intrusion detection systems, we demonstrate the\\ntestbed's capabilities in detecting anomalies and securing critical\\ninfrastructure without exposing sensitive network data. Comprehensive poisoning\\ntests, targeting both model and data integrity, evaluate the system's\\nrobustness under adversarial conditions. 
Our results show that while federated\\nlearning enhances data privacy and distributed learning, it remains vulnerable\\nto poisoning attacks, which must be mitigated to ensure its reliability in\\nreal-world applications.\",\"PeriodicalId\":501422,\"journal\":{\"name\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09794\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09794","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity
This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application in cybersecurity and evaluating its resilience against poisoning attacks. Federated Learning allows multiple clients to collaboratively train a global model while keeping their data decentralized, addressing critical needs for data privacy and security, particularly in sensitive fields like cybersecurity. Our testbed, built using the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration.
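For context, a minimal sketch of how a Flower-based client for federated intrusion detection might look is given below. It assumes Flower 1.x with the NumPyClient API and uses a toy linear model over randomly generated flow features; the paper's actual model architecture, dataset, and training configuration are not specified in the abstract.

```python
# Hypothetical sketch of a Flower client for intrusion detection.
# Assumes Flower 1.x; the model and data here are illustrative placeholders.
import flwr as fl
import numpy as np

NUM_FEATURES = 20  # assumed feature count for flow records


class IDSClient(fl.client.NumPyClient):
    """Holds one client's local traffic data and a toy linear model."""

    def __init__(self):
        rng = np.random.default_rng(0)
        self.weights = np.zeros(NUM_FEATURES, dtype=np.float32)
        # Placeholder local data; a real client would load its own flows.
        self.X = rng.random((100, NUM_FEATURES)).astype(np.float32)
        self.y = rng.integers(0, 2, size=100).astype(np.float32)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Receive the global weights, take one crude gradient step locally.
        self.weights = parameters[0]
        preds = self.X @ self.weights
        grad = self.X.T @ (preds - self.y) / len(self.y)
        self.weights = (self.weights - 0.01 * grad).astype(np.float32)
        return [self.weights], len(self.X), {}

    def evaluate(self, parameters, config):
        self.weights = parameters[0]
        preds = (self.X @ self.weights > 0.5).astype(np.float32)
        acc = float((preds == self.y).mean())
        return 1.0 - acc, len(self.X), {"accuracy": acc}


if __name__ == "__main__":
    fl.client.start_numpy_client(
        server_address="127.0.0.1:8080", client=IDSClient())
```

A matching server can be launched separately, for example with `fl.server.start_server(config=fl.server.ServerConfig(num_rounds=3), strategy=fl.server.strategy.FedAvg())`, after which each client connects to the given address; the exact orchestration used in the paper is not described in the abstract.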
Through a case study on federated intrusion detection systems, we demonstrate the testbed's capabilities in detecting anomalies and securing critical infrastructure without exposing sensitive network data. Comprehensive poisoning tests, targeting both model and data integrity, evaluate the system's robustness under adversarial conditions.
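The abstract does not name the specific attacks used, but two common poisoning strategies such a testbed might exercise are label flipping (data poisoning) and update boosting (model poisoning). The helpers below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative poisoning primitives a malicious client might apply (assumed, not
# taken from the paper): label flipping corrupts local training data, while
# update boosting exaggerates the client's contribution to the global model.
import numpy as np


def flip_labels(y, flip_fraction=0.3, seed=0):
    """Data poisoning: flip a fraction of a client's binary labels (0 <-> 1)."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(flip_fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y


def boost_update(local_weights, global_weights, boost=10.0):
    """Model poisoning: scale up the local update before sending it to the server."""
    return [g + boost * (l - g) for l, g in zip(local_weights, global_weights)]
```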
Our results show that while federated learning enhances data privacy and enables distributed learning, it remains vulnerable to poisoning attacks, which must be mitigated to ensure its reliability in real-world applications.
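As one hedged illustration of the kind of mitigation the authors call for, a robust aggregation rule such as a coordinate-wise median can limit the influence of a minority of poisoned updates. The abstract does not state which defenses, if any, the testbed implements, so the sketch below is an assumption for illustration only.

```python
# Assumed mitigation sketch: coordinate-wise median aggregation over the
# per-layer weight arrays submitted by all clients in a round.
import numpy as np


def median_aggregate(client_weight_lists):
    """Aggregate clients' per-layer weights with a coordinate-wise median,
    which is less sensitive to a few extreme (poisoned) updates than a mean."""
    num_layers = len(client_weight_lists[0])
    return [
        np.median(np.stack([weights[i] for weights in client_weight_lists]), axis=0)
        for i in range(num_layers)
    ]
```

In Flower, a rule like this could be wired in by subclassing a strategy such as `FedAvg` and overriding its `aggregate_fit` method, though that integration detail is likewise an assumption here rather than something reported in the abstract.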