Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks

Khaled Alrawashdeh, Stephen Goldsmith
{"title":"保护基于深度学习的异常检测系统免受白盒攻击和后门攻击","authors":"Khaled Alrawashdeh, Stephen Goldsmith","doi":"10.1109/ISTAS50296.2020.9462227","DOIUrl":null,"url":null,"abstract":"Deep Neural Network (DNN) has witnessed rapid progress and significant successes in the recent years. Wide range of applications depends on the high performance of deep learning to solve real-life challenges. Deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to adversarial examples and backdoor attacks. Stealthy adversarial examples and backdoor attacks can easily fool deep neural networks to generate the wrong results. The risk of adversarial examples attacks that target deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning by combining activation function and neurons pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using Deep Belief Network (DBN) and Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy from the attacks from an average 10% to 2% using DBN and from an average 14% to 2% using CoGAN. We evaluate the method using two benchmark datasets: NSL-KDD and ransomware.","PeriodicalId":196560,"journal":{"name":"2020 IEEE International Symposium on Technology and Society (ISTAS)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks\",\"authors\":\"Khaled Alrawashdeh, Stephen Goldsmith\",\"doi\":\"10.1109/ISTAS50296.2020.9462227\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Neural Network (DNN) has witnessed rapid progress and significant successes in the recent years. Wide range of applications depends on the high performance of deep learning to solve real-life challenges. Deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to adversarial examples and backdoor attacks. Stealthy adversarial examples and backdoor attacks can easily fool deep neural networks to generate the wrong results. The risk of adversarial examples attacks that target deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning by combining activation function and neurons pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using Deep Belief Network (DBN) and Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy from the attacks from an average 10% to 2% using DBN and from an average 14% to 2% using CoGAN. 
We evaluate the method using two benchmark datasets: NSL-KDD and ransomware.\",\"PeriodicalId\":196560,\"journal\":{\"name\":\"2020 IEEE International Symposium on Technology and Society (ISTAS)\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Symposium on Technology and Society (ISTAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISTAS50296.2020.9462227\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Technology and Society (ISTAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISTAS50296.2020.9462227","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Deep neural networks (DNNs) have seen rapid progress and significant successes in recent years, and a wide range of applications depends on the high performance of deep learning to solve real-life challenges. Deep learning is being applied in many safety-critical environments. However, deep neural networks have recently been found vulnerable to adversarial examples and backdoor attacks: stealthy adversarial examples and backdoor triggers can easily fool deep neural networks into producing wrong results. The risk of adversarial-example attacks targeting deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning that combines activation functions and neuron pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using a Deep Belief Network (DBN) and a Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy caused by the attacks from an average of 10% to 2% with the DBN and from an average of 14% to 2% with the CoGAN. We evaluate the method on two benchmark datasets: NSL-KDD and a ransomware dataset.
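The abstract does not spell out the pruning procedure, but activation-guided neuron pruning is a well-known ingredient of backdoor defenses (e.g., fine-pruning): neurons that stay dormant on clean data are often the ones a backdoor trigger exploits, so zeroing them out tends to disable the trigger at little cost to clean accuracy. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the function name, pruning fraction, and toy model are assumptions for illustration only.

import torch
import torch.nn as nn

def prune_dormant_neurons(model: nn.Module, layer: nn.Linear,
                          clean_loader, prune_fraction: float = 0.1):
    """Zero out the output neurons of `layer` that are least active on clean data."""
    activations = []

    def hook(_module, _inp, out):
        # Record the mean absolute activation of each output neuron for this batch.
        activations.append(out.detach().abs().mean(dim=0))

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for x, _ in clean_loader:
            model(x)
    handle.remove()

    mean_act = torch.stack(activations).mean(dim=0)   # shape: (out_features,)
    n_prune = int(prune_fraction * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]     # least-active neurons

    with torch.no_grad():
        layer.weight[prune_idx] = 0.0                 # disconnect the neuron's inputs
        if layer.bias is not None:
            layer.bias[prune_idx] = 0.0
    return prune_idx

# Toy usage on a small MLP with random "clean" data (purely illustrative).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
clean_loader = [(torch.randn(32, 20), torch.zeros(32)) for _ in range(10)]
pruned = prune_dormant_neurons(model, model[0], clean_loader, prune_fraction=0.1)
print(f"Pruned {len(pruned)} neurons:", pruned.tolist())

In a real defense the pruning fraction would be tuned against clean validation accuracy, and the paper additionally pairs pruning with a choice of activation function, a combination not reproduced in this sketch.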