Mutation Testing based Safety Testing and Improving on DNNs

Yuhao Wei, Song Huang, Yu Wang, Ruilin Liu, Chunyan Xia
{"title":"基于突变检测的dnn安全性检测及改进","authors":"Yuhao Wei, Song Huang, Yu Wang, Ruilin Liu, Chunyan Xia","doi":"10.1109/QRS57517.2022.00087","DOIUrl":null,"url":null,"abstract":"In recent years, deep neural networks (DNNs) have made great progress in people’s daily life since it becomes easier for data accessing and labeling. However, DNN has been proven to behave uncertainly, especially when facing small perturbations in their input data, which becomes a limitation for its application in self-driving and other safety-critical fields. Those human-made attacks like adversarial attacks would cause extremely serious consequences. In this work, we design and evaluate a safety testing method for DNNs based on mutation testing, and propose an adversarial training method based on testing results and joint optimization. First, we conduct an adversarial mutation on the test datasets and measure the performance of models in response to the adversarial samples by mutation scores. Next, we evaluate the validity of mutation scores as a quantitative indicator of safety by comparing DNN models and their updated versions. Finally, we construct a joint optimization problem with safety scores for adversarial training, thus improving the safety of the model as well as the generalizability of the defense capability.","PeriodicalId":143812,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mutation Testing based Safety Testing and Improving on DNNs\",\"authors\":\"Yuhao Wei, Song Huang, Yu Wang, Ruilin Liu, Chunyan Xia\",\"doi\":\"10.1109/QRS57517.2022.00087\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, deep neural networks (DNNs) have made great progress in people’s daily life since it becomes easier for data accessing and labeling. However, DNN has been proven to behave uncertainly, especially when facing small perturbations in their input data, which becomes a limitation for its application in self-driving and other safety-critical fields. Those human-made attacks like adversarial attacks would cause extremely serious consequences. In this work, we design and evaluate a safety testing method for DNNs based on mutation testing, and propose an adversarial training method based on testing results and joint optimization. First, we conduct an adversarial mutation on the test datasets and measure the performance of models in response to the adversarial samples by mutation scores. Next, we evaluate the validity of mutation scores as a quantitative indicator of safety by comparing DNN models and their updated versions. 
Finally, we construct a joint optimization problem with safety scores for adversarial training, thus improving the safety of the model as well as the generalizability of the defense capability.\",\"PeriodicalId\":143812,\"journal\":{\"name\":\"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/QRS57517.2022.00087\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QRS57517.2022.00087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, deep neural networks (DNNs) have made great progress in people's daily lives as data access and labeling have become easier. However, DNNs have been shown to behave unpredictably, especially when facing small perturbations in their input data, which limits their application in self-driving and other safety-critical fields. Human-made attacks such as adversarial attacks can cause extremely serious consequences. In this work, we design and evaluate a safety testing method for DNNs based on mutation testing, and propose an adversarial training method based on the testing results and joint optimization. First, we apply adversarial mutation to the test datasets and measure model performance on the adversarial samples by mutation scores. Next, we evaluate the validity of mutation scores as a quantitative indicator of safety by comparing DNN models with their updated versions. Finally, we construct a joint optimization problem with safety scores for adversarial training, improving both the safety of the model and the generalizability of its defense capability.
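
The abstract does not spell out the paper's mutation operators or its exact score definition. The sketch below is one plausible reading of the first step, assuming FGSM-style adversarial mutants and defining the mutation score as the fraction of mutants that flip the model's prediction relative to the clean input; the function names `fgsm_mutant` and `mutation_score` are illustrative, not the authors' API.

```python
# Minimal sketch: adversarially mutate test inputs and compute a mutation
# score. FGSM is a stand-in mutation operator; the paper's operators and
# score definition may differ.
import torch
import torch.nn.functional as F

def fgsm_mutant(model, x, y, eps=0.03):
    """Generate one adversarial mutant of x via the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, keep pixels in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def mutation_score(model, loader, eps=0.03):
    """Fraction of mutants whose predicted label differs from the prediction
    on the clean input (i.e., "killed" mutants)."""
    model.eval()
    killed, total = 0, 0
    for x, y in loader:
        with torch.no_grad():
            pred_clean = model(x).argmax(dim=1)
        x_adv = fgsm_mutant(model, x, y, eps)  # needs gradients enabled
        with torch.no_grad():
            pred_adv = model(x_adv).argmax(dim=1)
        killed += (pred_adv != pred_clean).sum().item()
        total += y.size(0)
    return killed / total
```

A higher score on this reading means the model is more easily perturbed, so comparing scores across a model and its updated versions gives the kind of quantitative safety indicator the abstract describes.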
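The abstract likewise leaves the joint optimization objective unspecified. A common formulation, used here purely as an assumption, weights the clean loss against the loss on adversarial mutants; the coefficient `lam` stands in for the paper's safety-score term. The sketch reuses `fgsm_mutant` from the code above.

```python
# Hedged sketch of the adversarial-training step as a weighted joint
# objective: L = L_clean + lam * L_adv. The paper's actual objective,
# built from its safety scores, may be different.
import torch.nn.functional as F

def train_epoch(model, loader, opt, eps=0.03, lam=1.0):
    """One epoch of adversarial training on clean plus mutant batches."""
    model.train()
    for x, y in loader:
        # Generate mutants first: fgsm_mutant backpropagates through the
        # model, so clear its leftover gradients before the real update.
        x_adv = fgsm_mutant(model, x, y, eps)
        opt.zero_grad()
        loss = (F.cross_entropy(model(x), y)
                + lam * F.cross_entropy(model(x_adv), y))
        loss.backward()
        opt.step()
```

Keeping the clean-loss term in the objective is what preserves accuracy on unperturbed data while the adversarial term hardens the model, which matches the abstract's claim of improving safety without sacrificing the generalizability of the defense.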