{"title":"Realism versus Performance for Adversarial Examples Against DL-based NIDS","authors":"Huda Ali Alatwi, C. Morisset","doi":"10.1145/3555776.3577671","DOIUrl":null,"url":null,"abstract":"The application of deep learning-based (DL) network intrusion detection systems (NIDS) enables effective automated detection of cyberattacks. Such models can extract valuable features from high-dimensional and heterogeneous network traffic with minimal feature engineering and provide high accuracy detection rates. However, it has been shown that DL can be vulnerable to adversarial examples (AEs), which mislead classification decisions at inference time, and several works have shown that AEs are indeed a threat against DL-based NIDS. In this work, we argue that these threats are not necessarily realistic. Indeed, some general techniques used to generate AE manipulate features in a way that would be inconsistent with actual network traffic. In this paper, we first implement the main AE attacks selected from the literature (FGSM, BIM, PGD, NewtonFool, CW, DeepFool, EN, Boundary, HSJ, ZOO) for two different datasets (WSN-DS and BoT-IoT) and we compare their relative performance. We then analyze the perturbation generated by these attacks and use the metrics to establish a notion of \"attack unrealism\". We conclude that, for these datasets, some of these attacks are performant but not realistic.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":0.4000,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Computing Review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3555776.3577671","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
The application of deep learning-based (DL) network intrusion detection systems (NIDS) enables effective automated detection of cyberattacks. Such models can extract valuable features from high-dimensional and heterogeneous network traffic with minimal feature engineering and achieve high detection accuracy. However, DL models have been shown to be vulnerable to adversarial examples (AEs), which mislead classification decisions at inference time, and several works have shown that AEs are indeed a threat against DL-based NIDS. In this work, we argue that these threats are not necessarily realistic. Indeed, some general techniques used to generate AEs manipulate features in a way that would be inconsistent with actual network traffic. In this paper, we first implement the main AE attacks selected from the literature (FGSM, BIM, PGD, NewtonFool, CW, DeepFool, EN, Boundary, HSJ, ZOO) for two different datasets (WSN-DS and BoT-IoT) and compare their relative performance. We then analyze the perturbations generated by these attacks and use these metrics to establish a notion of "attack unrealism". We conclude that, for these datasets, some of these attacks are performant but not realistic.
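To make the workflow concrete, below is a minimal, illustrative sketch (not the paper's implementation) of the two steps the abstract describes: generating FGSM adversarial examples against a tabular DL-based NIDS classifier, and flagging perturbed flows that break basic traffic-feature constraints, which is the kind of check an "attack unrealism" metric would build on. It assumes a trained PyTorch model over raw flow features; the feature-index arguments are hypothetical placeholders.

```python
# Illustrative sketch only: FGSM on tabular NIDS features plus a simple
# realism check. Assumes `model` is a trained PyTorch classifier; the
# integer/immutable feature index lists are hypothetical.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM: shift each feature by +/- eps along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def unrealistic(x, x_adv, integer_idx, immutable_idx, tol=1e-6):
    """Per-row realism check: count-like fields must stay non-negative integers,
    and fields the attacker cannot control (e.g. protocol flags) must be unchanged."""
    neg = (x_adv[:, integer_idx] < -tol).any(dim=1)
    frac = ((x_adv[:, integer_idx] - x_adv[:, integer_idx].round()).abs() > tol).any(dim=1)
    moved = ((x_adv[:, immutable_idx] - x[:, immutable_idx]).abs() > tol).any(dim=1)
    return neg | frac | moved  # True for rows inconsistent with real traffic
```

Gradient-based attacks such as FGSM perturb every feature independently, so a check of this kind will typically flag many of the resulting samples as inconsistent with feasible network traffic, even when the attack's evasion rate against the classifier is high.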