Title: Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey
Authors: Huda Ali Alatwi, A. Aldweesh
Venue: 2021 IEEE World AI IoT Congress (AIIoT)
Published: 2021-05-10
DOI: 10.1109/AIIoT52608.2021.9454214
Citations: 7
Abstract
Due to their massive success in various domains, deep learning techniques are increasingly used to design network intrusion detection solutions that detect and mitigate both known and unknown attacks with high detection accuracy and minimal feature engineering. However, deep learning models have been found vulnerable to specially crafted data instances, so-called adversarial examples, that mislead a model into incorrect classification decisions. This vulnerability allows attackers to target NIDSs in a black-box setting by adding small, carefully crafted perturbations to malicious traffic in order to evade detection and disrupt the system's critical functionalities. Yet little research has addressed the risks of black-box adversarial attacks against NIDSs or proposed mitigation solutions. This survey explores this research problem and identifies open issues and areas that demand further study.
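To make the evasion mechanism described above concrete, the following is a minimal sketch of a gradient-based perturbation in the style of the fast gradient sign method (FGSM). It is not taken from the survey: the logistic-regression "detector", its weights, and all feature values are hypothetical stand-ins for a trained NIDS model. In a true black-box setting, an attacker would typically train a surrogate model like this one and rely on transferability, since the target's gradients are not available.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Shift features x by eps in the direction that increases the
    classifier's loss, pushing a malicious flow toward 'benign'.
    Uses the closed-form gradient of binary cross-entropy for a
    logistic-regression surrogate: dL/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # hypothetical trained detector weights
b = 0.0
x = rng.normal(size=5)   # feature vector of a "malicious" flow
y = 1.0                  # true label: malicious

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_orig = sigmoid(w @ x + b)      # detector score before perturbation
p_adv = sigmoid(w @ x_adv + b)   # score after the adversarial shift
print(p_orig, p_adv)
```

Because the step moves against the gradient of the "malicious" label, `p_adv` is strictly lower than `p_orig`: the perturbed flow looks less malicious to this surrogate. Real traffic-level attacks must additionally keep perturbed features valid (e.g., non-negative packet counts), a constraint this sketch omits.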