Defending against backdoor attack on deep neural networks based on multi-scale inactivation

Information Sciences (IF 8.1, CAS Q1, Computer Science, Information Systems) · Vol. 690, Article 121562 · Pub Date: 2024-10-17 · DOI: 10.1016/j.ins.2024.121562
Anqing Zhang, Honglong Chen, Xiaomeng Wang, Junjian Li, Yudong Gao, Xingang Wang
Abstract

Deep neural networks (DNNs) perform excellently in a wide range of applications, especially image classification tasks. However, DNNs also face the threat of backdoor attacks, which embed a hidden backdoor into a model: the infected model classifies benign images correctly, while misclassifying images containing the backdoor trigger as the target label. To obtain a clean model from a backdoored dataset, we propose a Kalman-filtering-based multi-scale inactivation scheme, which can effectively remove poison data from a poisoned dataset. Every sample in the suspicious training dataset is judged by multi-scale inactivation, yielding a series of judging results; data fusion is then conducted with a Kalman filter to determine whether the sample is poisoned. To further improve performance, a scheme based on trigger localization and target determination is proposed. Extensive experiments demonstrate the effectiveness of the proposed methods: they remove poison samples with a recall rate above 99%, and the attack success rate of the retrained clean model is below 1%.
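The fusion step in the abstract (a series of per-scale judgments combined with a Kalman filter, then thresholded to decide poison vs. clean) can be sketched with a toy one-dimensional filter. This is an illustrative sketch only, not the paper's implementation: the 0/1 score encoding, the noise variances, and the 0.5 threshold are all assumptions.

```python
def kalman_fuse(scores, process_var=1e-3, measurement_var=0.05):
    """Fuse a sequence of per-scale poison scores (0 = looks benign,
    1 = looks poisoned) into one estimate with a 1-D Kalman filter."""
    estimate, error_var = 0.5, 1.0          # uninformative prior
    for z in scores:
        error_var += process_var            # predict step: inflate uncertainty
        gain = error_var / (error_var + measurement_var)  # Kalman gain
        estimate += gain * (z - estimate)   # update with measurement z
        error_var *= 1.0 - gain             # shrink uncertainty after update
    return estimate

def is_poison(scores, threshold=0.5):
    """Flag a sample as poisoned if the fused score exceeds the threshold."""
    return kalman_fuse(scores) > threshold

# Example: four of five scales flag the sample as poisoned.
print(is_poison([1, 1, 1, 0, 1]))   # True
print(is_poison([0, 0, 0, 1, 0]))   # False
```

A single disagreeing scale only pulls the fused estimate part of the way toward its value, which is why fusing several noisy judgments is more robust than trusting any one scale.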
Source journal: Information Sciences (Engineering Technology – Computer Science: Information Systems)
CiteScore: 14.00 · Self-citation rate: 17.30% · Articles per year: 1322 · Review time: 10.4 months
Journal introduction: Informatics and Computer Science Intelligent Systems Applications is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions. Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.