Tiki-Taka: Attacking and Defending Deep Learning-based Intrusion Detection Systems

Chaoyun Zhang, X. Costa, P. Patras
DOI: 10.1145/3411495.3421359
Published in: Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop, 2020-11-09
Citations: 29

Abstract

Neural networks are increasingly important in the development of Network Intrusion Detection Systems (NIDS), as they have the potential to achieve high detection accuracy while requiring limited feature engineering. Deep learning-based detectors can, however, be vulnerable to adversarial examples, by which attackers who may be oblivious to the precise mechanics of the targeted NIDS add subtle perturbations to malicious traffic features, with the aim of evading detection and disrupting critical systems in a cost-effective manner. Defending against such adversarial attacks is therefore of high importance, but requires addressing daunting challenges. In this paper, we introduce Tiki-Taka, a general framework that (i) assesses the robustness of state-of-the-art deep learning-based NIDS against adversarial manipulations, and (ii) incorporates our proposed defense mechanisms to increase the NIDS' resistance to attacks employing such evasion techniques. Specifically, we select five cutting-edge adversarial attack mechanisms to subvert three popular malicious traffic detectors that employ neural networks. We experiment with a publicly available dataset and consider both one-to-all and one-to-one classification scenarios, i.e., discriminating illicit vs. benign traffic and, respectively, identifying specific types of anomalous traffic among the many observed. The results reveal that, under realistic constraints, attackers can evade NIDS with success rates of up to 35.7% by altering only time-based features of the traffic generated. To counteract these weaknesses, we propose three defense mechanisms, namely model voting ensembling, ensembling adversarial training, and query detection. To the best of our knowledge, our work is the first to propose defenses against adversarial attacks targeting NIDS. We demonstrate that, when employing the proposed methods, intrusion detection rates can be improved to nearly 100% against most types of malicious traffic, and attacks with potentially catastrophic consequences (e.g., botnets) can be thwarted. This confirms the effectiveness of our solutions and makes the case for their adoption when designing robust and reliable deep anomaly detectors.
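The first defense named in the abstract, model voting ensembling, can be illustrated with a minimal sketch. This is not the paper's implementation: the three toy "detectors" and the feature names below are hypothetical stand-ins for the neural models evaluated in the paper, chosen only to show why an adversarial perturbation crafted against one model is less likely to fool a majority of diverse models at once.

```python
def majority_vote(detectors, sample):
    """Flag a traffic sample as malicious (1) if a strict majority of detectors agree."""
    votes = [detector(sample) for detector in detectors]  # each detector returns 0 or 1
    return int(sum(votes) > len(votes) / 2)

# Hypothetical detectors keyed on different feature subsets; an attacker who
# perturbs only time-based features (the abstract's 35.7% evasion scenario)
# may fool det_time yet still be outvoted by size- and rate-based models.
det_time = lambda s: int(s["iat_mean"] < 0.01)    # mean inter-arrival time (s)
det_size = lambda s: int(s["pkt_bytes"] > 1400)   # packet size (bytes)
det_rate = lambda s: int(s["pkts_per_s"] > 1000)  # packet rate (pkts/s)

sample = {"iat_mean": 0.005, "pkt_bytes": 1500, "pkts_per_s": 200}
print(majority_vote([det_time, det_size, det_rate], sample))  # → 1 (2 of 3 vote malicious)
```

The design intuition is diversity: because the ensemble's decision boundary is the aggregate of several dissimilar boundaries, a low-cost perturbation that crosses one model's boundary rarely crosses most of them.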