Title: A differential privacy aided DeepFed intrusion detection system for IoT applications
Authors: Sayeda Suaiba Anwar, Asaduzzaman, Iqbal H. Sarker
Journal: Security and Privacy (JCR Q3, Computer Science, Information Systems; Impact Factor 1.5)
DOI: 10.1002/spy2.445
Publication date: 2024-07-10 (Journal Article)
Citations: 0
Abstract
In the rapidly developing Internet of Things (IoT) ecosystem, safeguarding the privacy and accuracy of linked devices and networks is of utmost importance, and the challenge lies in effectively implementing intrusion detection systems on resource-constrained IoT devices. This study introduces a differential privacy (DP)-aided DeepFed architecture for intrusion detection in IoT contexts as a novel approach to these difficulties. To build an intrusion detection model, we combine components of a convolutional neural network with bidirectional long short-term memory. We apply this approach to the Bot-IoT dataset, rigorously curated by the University of New South Wales (UNSW), and to the N-BaIoT dataset. Our major goal is to create a model that delivers high accuracy while protecting privacy, an often-overlooked aspect of IoT security. Intrusion detection tasks are distributed across multiple IoT devices using federated learning principles to protect data privacy, and a DP framework is incorporated to gauge and minimize information leakage, while the intricate relationship between privacy and accuracy is investigated in pursuit of an ideal compromise. The trade-off between privacy preservation and model accuracy is explored by adjusting the privacy loss and the noise multiplier. Our research enhances IoT security by introducing a deep learning model for intrusion detection on IoT devices, explores the integration of DP in a federated learning framework for IoT, and offers guidance on minimizing the accuracy-privacy trade-off based on specific privacy and security needs. Our study examines the effects of varying epsilon values on accuracy, for several delta values, with client counts ranging from 5 to 25. We also investigate the influence of several noise multipliers on accuracy and find a consistent accuracy curve, especially around a noise multiplier value of about 0.5.
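The role the noise multiplier plays in this trade-off can be illustrated with a minimal sketch. This is not the paper's implementation: the hypothetical `dp_federated_average` below L2-clips each client's model update (bounding per-client sensitivity), adds Gaussian noise with standard deviation equal to the noise multiplier times the clipping norm, and averages, and `epsilon_for` applies the classical single-shot Gaussian-mechanism bound relating the noise multiplier to an (epsilon, delta) privacy guarantee (practical DP-SGD accounting would instead compose over many rounds).

```python
import math
import random

def clip_update(update, clip_norm):
    # L2-clip one client's update so its norm is at most clip_norm,
    # which bounds each client's influence (the sensitivity) on the sum.
    norm = math.sqrt(sum(w * w for w in update))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [w * scale for w in update]

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=0.5, seed=0):
    # Clip each client's update, sum the clipped updates, add Gaussian noise
    # with sigma = noise_multiplier * clip_norm, then average over clients.
    # A larger noise multiplier means stronger privacy but noisier averages.
    rng = random.Random(seed)
    n = len(client_updates)
    dim = len(client_updates[0])
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    sigma = noise_multiplier * clip_norm
    noisy_sum = [sum(c[i] for c in clipped) + rng.gauss(0.0, sigma)
                 for i in range(dim)]
    return [v / n for v in noisy_sum]

def epsilon_for(noise_multiplier, delta):
    # Classical Gaussian-mechanism bound for one release with sensitivity 1
    # (after clipping): sigma = sqrt(2 ln(1.25/delta)) / epsilon, so
    # epsilon = sqrt(2 ln(1.25/delta)) / noise_multiplier.
    return math.sqrt(2.0 * math.log(1.25 / delta)) / noise_multiplier
```

For example, with delta fixed, raising the noise multiplier lowers epsilon (a tighter privacy loss), which is exactly the axis along which the accuracy curves in the study are swept.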
The findings of this study have the potential to enhance IoT ecosystem security and privacy, contributing to the trustworthiness and sustainability of the IoT landscape.