Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks

R. Ning, Jiang Li, Chunsheng Xin, Hongyi Wu
{"title":"Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks","authors":"R. Ning, Jiang Li, Chunsheng Xin, Hongyi Wu","doi":"10.1109/INFOCOM42981.2021.9488902","DOIUrl":null,"url":null,"abstract":"This paper reports a new clean-label data poisoning backdoor attack, named Invisible Poison, which stealthily and aggressively plants a backdoor in neural networks. It converts a regular trigger to a noised trigger that can be easily concealed inside images for training NN, with the objective to plant a backdoor that can be later activated by the trigger. Compared with existing data poisoning backdoor attacks, this newfound attack has the following distinct properties. First, it is a blackbox attack, requiring zero-knowledge of the target model. Second, this attack utilizes \"invisible poison\" to achieve stealthiness where the trigger is disguised as ‘noise’, and thus can easily evade human inspection. On the other hand, this noised trigger remains effective in the feature space to poison training data. Third, the attack is practical and aggressive. A backdoor can be effectively planted with a small amount of poisoned data and is robust to most data augmentation methods during training. The attack is fully tested on multiple benchmark datasets including MNIST, Cifar10, and ImageNet10, as well as application specific data sets such as Yahoo Adblocker and GTSRB. Two countermeasures, namely Supervised and Unsupervised Poison Sample Detection, are introduced to defend the attack.","PeriodicalId":293079,"journal":{"name":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM42981.2021.9488902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17

Abstract

This paper reports a new clean-label data poisoning backdoor attack, named Invisible Poison, which stealthily and aggressively plants a backdoor in neural networks. It converts a regular trigger into a noised trigger that can be easily concealed inside images used for training, with the objective of planting a backdoor that can later be activated by the trigger. Compared with existing data poisoning backdoor attacks, this new attack has the following distinct properties. First, it is a blackbox attack, requiring zero knowledge of the target model. Second, the attack achieves stealthiness through "invisible poison": the trigger is disguised as noise and thus easily evades human inspection, yet it remains effective in the feature space to poison training data. Third, the attack is practical and aggressive: a backdoor can be effectively planted with a small amount of poisoned data and is robust to most data augmentation methods used during training. The attack is fully tested on multiple benchmark datasets, including MNIST, CIFAR-10, and ImageNet10, as well as application-specific datasets such as Yahoo Adblocker and GTSRB. Two countermeasures, namely Supervised and Unsupervised Poison Sample Detection, are introduced to defend against the attack.
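The abstract does not describe how the regular trigger is converted into a noised trigger, so the following is only a minimal, hypothetical Python sketch of the clean-label poisoning step it alludes to: an additive, noise-like trigger is blended into a fraction of correctly labeled target-class training images. The names make_noise_trigger and poison_clean_label and the uniform-noise trigger are illustrative assumptions, not the paper's actual method.

# Hypothetical sketch of clean-label poisoning with a noise-like trigger.
# The trigger-to-noise conversion here (a fixed random pattern) is an
# assumption for illustration; the paper's conversion is not reproduced.
import numpy as np

rng = np.random.default_rng(seed=0)

def make_noise_trigger(shape, scale=0.05):
    """Generate a fixed, noise-like trigger pattern with values in [-scale, scale]."""
    return rng.uniform(-scale, scale, size=shape).astype(np.float32)

def poison_clean_label(images, labels, target_class, trigger, ratio=0.1):
    """Blend the trigger into a fraction of *target-class* images only,
    keeping their (already correct) labels -- the clean-label property."""
    images = images.copy()
    idx = np.flatnonzero(labels == target_class)
    if idx.size == 0:
        return images, idx  # no target-class samples to poison
    chosen = rng.choice(idx, size=max(1, int(ratio * idx.size)), replace=False)
    # Additive blend, clipped back into the valid pixel range [0, 1].
    images[chosen] = np.clip(images[chosen] + trigger, 0.0, 1.0)
    return images, chosen

# Toy data: 100 "images" of shape 32x32x3 in [0, 1], with 10 classes.
images = rng.uniform(0.0, 1.0, size=(100, 32, 32, 3)).astype(np.float32)
labels = rng.integers(0, 10, size=100)

trigger = make_noise_trigger((32, 32, 3))
poisoned, chosen = poison_clean_label(images, labels, target_class=3, trigger=trigger)
print(f"poisoned {chosen.size} class-3 images; labels left unchanged")

At inference time, adding the same trigger to an arbitrary input would then steer a backdoored model toward the target class; the paper's training procedure and its specific trigger construction are beyond what the abstract states and are not reproduced here.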