{"title":"GuardianAI: Privacy-preserving federated anomaly detection with differential privacy","authors":"Abdulatif Alabdulatif","doi":"10.1016/j.array.2025.100381","DOIUrl":null,"url":null,"abstract":"<div><div>In the rapidly evolving landscape of cybersecurity, privacy-preserving anomaly detection has become crucial, particularly with the rise of sophisticated privacy attacks in distributed learning systems. Traditional centralized anomaly detection systems face challenges related to data privacy and scalability, making federated learning a promising alternative. However, federated learning models remain vulnerable to several privacy attacks, such as inference attacks, model inversion, and gradient leakage. To address these threats, this paper presents GuardianAI, a novel federated anomaly detection framework that incorporates advanced differential privacy techniques, including Gaussian noise addition and secure aggregation protocols, specifically designed to mitigate these attacks. GuardianAI aims to enhance privacy while maintaining high detection accuracy across distributed nodes. The framework effectively prevents attackers from extracting sensitive data from model updates by introducing noise to the gradients and securely aggregating updates across nodes. Experimental results show that GuardianAI achieves a testing accuracy of 99.8 %, outperforming other models like Logistic Regression, SVM, and Random Forest, while robustly defending against common privacy threats. These results demonstrate the practical potential of GuardianAI for secure deployment in various network environments, ensuring privacy without compromising performance.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"26 ","pages":"Article 100381"},"PeriodicalIF":2.3000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005625000086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Abstract
In the rapidly evolving landscape of cybersecurity, privacy-preserving anomaly detection has become crucial, particularly with the rise of sophisticated privacy attacks on distributed learning systems. Traditional centralized anomaly detection systems face data-privacy and scalability challenges, making federated learning a promising alternative. However, federated learning models remain vulnerable to several privacy attacks, including inference attacks, model inversion, and gradient leakage. To address these threats, this paper presents GuardianAI, a novel federated anomaly detection framework that incorporates differential privacy techniques, including Gaussian noise addition and secure aggregation protocols, specifically designed to mitigate these attacks. GuardianAI aims to enhance privacy while maintaining high detection accuracy across distributed nodes. By adding noise to gradients and securely aggregating updates across nodes, the framework prevents attackers from extracting sensitive data from model updates. Experimental results show that GuardianAI achieves a testing accuracy of 99.8%, outperforming baseline models such as Logistic Regression, SVM, and Random Forest, while robustly defending against common privacy threats. These results demonstrate GuardianAI's practical potential for secure deployment in diverse network environments, ensuring privacy without compromising performance.
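The abstract combines two standard ingredients: each client's gradient is clipped to bound its sensitivity and perturbed with calibrated Gaussian noise (the Gaussian mechanism of differential privacy), and the server only ever sees an aggregate of the sanitized updates. The sketch below illustrates that pipeline; it is a minimal illustration, not the paper's actual implementation, and the names (dp_sanitize, federated_aggregate, clip_norm, noise_multiplier) are assumptions chosen for this example. A real secure-aggregation protocol would also mask individual updates cryptographically, which is omitted here.

```python
import numpy as np

def dp_sanitize(gradient, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's gradient to bound its L2 sensitivity, then add
    Gaussian noise scaled to that bound (the standard Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=gradient.shape)
    return clipped + noise

def federated_aggregate(client_gradients, **dp_kwargs):
    """Average the sanitized updates across nodes; the server observes
    only the noised aggregate, never a raw client gradient."""
    sanitized = [dp_sanitize(g, **dp_kwargs) for g in client_gradients]
    return np.mean(sanitized, axis=0)

# Example: three clients each contribute a local gradient vector.
clients = [np.random.randn(10) for _ in range(3)]
global_update = federated_aggregate(clients, clip_norm=1.0, noise_multiplier=1.1)
print(global_update)
```

The noise_multiplier controls the privacy/utility trade-off: larger values give stronger differential-privacy guarantees per round at the cost of noisier aggregates, which is the balance the abstract's 99.8% accuracy claim is measured against.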