{"title":"自编码器对深度学习模型鲁棒性的影响","authors":"Elif Değirmenci, Ilker Özçelik, A. Yazıcı","doi":"10.1109/SIU55565.2022.9864975","DOIUrl":null,"url":null,"abstract":"Adversarial attacks aim to deceive the target system. Recently, deep learning methods have become a target of adversarial attacks. Even small perturbations could lead to classification errors in deep learning models. In an intrusion detection system using deep learning method, adversarial attack can cause classification error and malicious traffic can be classified as benign. In this study, the effects of adversarial attacks on the accuracy of deep learning-based intrusion detection systems were examined. CICIDS2017 dataset was used to test the detection systems . At first, DDoS attacks weredetected using Autoencoder, MLP, AEMLP, DNN, AEDNN, CNN and AECNN methods. Then, the Fast Gradient Sign Method (FGSM) is used to perform adversarial attacks. Finally, the sensitivity of the methods against the adversarial attacks were examined. Our results showed that the classification performance of deep learning based detection methods decreased up to %17 after the adversarial attacks. The results obtained in this study form the basis for the validation and validation studies of learning-based intrusion detection systems.","PeriodicalId":115446,"journal":{"name":"2022 30th Signal Processing and Communications Applications Conference (SIU)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Effects of Autoencoders on the Robustness of Deep Learning Models\",\"authors\":\"Elif Değirmenci, Ilker Özçelik, A. Yazıcı\",\"doi\":\"10.1109/SIU55565.2022.9864975\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial attacks aim to deceive the target system. Recently, deep learning methods have become a target of adversarial attacks. Even small perturbations could lead to classification errors in deep learning models. In an intrusion detection system using deep learning method, adversarial attack can cause classification error and malicious traffic can be classified as benign. In this study, the effects of adversarial attacks on the accuracy of deep learning-based intrusion detection systems were examined. CICIDS2017 dataset was used to test the detection systems . At first, DDoS attacks weredetected using Autoencoder, MLP, AEMLP, DNN, AEDNN, CNN and AECNN methods. Then, the Fast Gradient Sign Method (FGSM) is used to perform adversarial attacks. Finally, the sensitivity of the methods against the adversarial attacks were examined. Our results showed that the classification performance of deep learning based detection methods decreased up to %17 after the adversarial attacks. 
The results obtained in this study form the basis for the validation and validation studies of learning-based intrusion detection systems.\",\"PeriodicalId\":115446,\"journal\":{\"name\":\"2022 30th Signal Processing and Communications Applications Conference (SIU)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 30th Signal Processing and Communications Applications Conference (SIU)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SIU55565.2022.9864975\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 30th Signal Processing and Communications Applications Conference (SIU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIU55565.2022.9864975","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The Effects of Autoencoders on the Robustness of Deep Learning Models
Adversarial attacks aim to deceive the target system. Recently, deep learning methods have become a target of adversarial attacks. Even small perturbations can lead to classification errors in deep learning models. In an intrusion detection system that uses a deep learning method, an adversarial attack can cause classification errors, and malicious traffic may be classified as benign. In this study, the effects of adversarial attacks on the accuracy of deep learning-based intrusion detection systems were examined. The CICIDS2017 dataset was used to test the detection systems. First, DDoS attacks were detected using the Autoencoder, MLP, AEMLP, DNN, AEDNN, CNN and AECNN methods. Then, the Fast Gradient Sign Method (FGSM) was used to perform adversarial attacks. Finally, the sensitivity of the methods to the adversarial attacks was examined. Our results showed that the classification performance of deep learning-based detection methods decreased by up to 17% after the adversarial attacks. The results obtained in this study form the basis for verification and validation studies of learning-based intrusion detection systems.
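
For readers unfamiliar with FGSM, which the abstract names but does not define, here is a minimal sketch of the attack: each input is shifted by ε in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x J(θ, x, y)). The PyTorch code below is an illustrative assumption, not the authors' implementation; the classifier `model`, the [0, 1] feature scaling, and the value of `epsilon` are all hypothetical.

```python
# Illustrative FGSM sketch (assumed PyTorch setup; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Return x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # classification loss J(theta, x, y)
    loss.backward()                      # populates x.grad
    # One signed-gradient step; the clamp assumes features scaled to [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Evaluating a trained detector on the test set before and after applying such a perturbation yields the kind of robustness comparison the abstract describes; the reported drop of up to 17% is specific to the authors' models and settings.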