{"title":"Deep - test:一个基于漏洞的深度神经网络鲁棒性测试和增强框架","authors":"Minghao Yang, Shunkun Yang, Wenda Wu","doi":"10.1109/QRS57517.2022.00081","DOIUrl":null,"url":null,"abstract":"Effective testing methods have been proposed to verify the reliability and robustness of Deep Neural Networks (DNNs). However, enhancing their adversarial robustness against various attacks and perturbations through testing remains a key issue for their further applications. Therefore, we propose DeepRTest, a white-box testing framework for DNNs guided by vulnerability to effectively test and improve the adversarial robustness of DNNs. Specifically, the test input generation algorithm based on joint optimization fully induces the misclassification of DNNs. The generated high neuron coverage inputs near classification boundaries expose vulnerabilities to test adversarial robustness comprehensively. Then, retraining based on the generated inputs effectively optimize the classification boundaries and fix the vulnerabilities to improve the adversarial robustness against perturbations. The experimental results indicate that DeepRTest achieved higher neuron coverage and classification accuracy than baseline methods. Moreover, DeepRTest could improve the adversarial robustness by 39% on average, which was 12.56% higher than other methods.","PeriodicalId":143812,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DeepRTest: A Vulnerability-Guided Robustness Testing and Enhancement Framework for Deep Neural Networks\",\"authors\":\"Minghao Yang, Shunkun Yang, Wenda Wu\",\"doi\":\"10.1109/QRS57517.2022.00081\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Effective testing methods have been proposed to verify the reliability and robustness of Deep Neural Networks (DNNs). However, enhancing their adversarial robustness against various attacks and perturbations through testing remains a key issue for their further applications. Therefore, we propose DeepRTest, a white-box testing framework for DNNs guided by vulnerability to effectively test and improve the adversarial robustness of DNNs. Specifically, the test input generation algorithm based on joint optimization fully induces the misclassification of DNNs. The generated high neuron coverage inputs near classification boundaries expose vulnerabilities to test adversarial robustness comprehensively. Then, retraining based on the generated inputs effectively optimize the classification boundaries and fix the vulnerabilities to improve the adversarial robustness against perturbations. The experimental results indicate that DeepRTest achieved higher neuron coverage and classification accuracy than baseline methods. 
Moreover, DeepRTest could improve the adversarial robustness by 39% on average, which was 12.56% higher than other methods.\",\"PeriodicalId\":143812,\"journal\":{\"name\":\"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/QRS57517.2022.00081\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QRS57517.2022.00081","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DeepRTest: A Vulnerability-Guided Robustness Testing and Enhancement Framework for Deep Neural Networks
Effective testing methods have been proposed to verify the reliability and robustness of Deep Neural Networks (DNNs). However, enhancing their adversarial robustness against various attacks and perturbations through testing remains a key challenge for further applications. We therefore propose DeepRTest, a vulnerability-guided white-box testing framework that effectively tests and improves the adversarial robustness of DNNs. Specifically, its test input generation algorithm, based on joint optimization, induces misclassification in the DNN under test. The generated inputs achieve high neuron coverage and lie near classification boundaries, exposing vulnerabilities so that adversarial robustness can be tested comprehensively. Retraining on the generated inputs then optimizes the classification boundaries and fixes the exposed vulnerabilities, improving adversarial robustness against perturbations. Experimental results indicate that DeepRTest achieved higher neuron coverage and classification accuracy than baseline methods. Moreover, DeepRTest improved adversarial robustness by 39% on average, 12.56% higher than the other methods.
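The abstract does not spell out the joint-optimization objective, but the general idea of combining a misclassification-inducing term with a neuron-coverage term can be sketched as follows. This is a minimal, illustrative PyTorch-style sketch, not DeepRTest's actual algorithm: the model interface that also returns a hidden activation, the 0.25 coverage threshold, the weight lam, and all hyperparameters are assumptions made for the example.

```python
# Hypothetical sketch of joint-optimization test input generation:
# perturb an input so the model drifts toward misclassification while
# under-activated neurons are pushed up (raising neuron coverage).
# Interface and hyperparameters are illustrative, not from the paper.
import torch


def generate_test_input(model, x, label, steps=20, step_size=0.01,
                        lam=1.0, eps=0.1):
    """Return a perturbed copy of x optimized for a joint objective:
    (1) reduce the true-class logit relative to the runner-up, and
    (2) increase activations of currently under-covered neurons."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # Assumed interface: model returns (logits, hidden_activations).
        logits, hidden = model(x_adv)

        # Term 1: misclassification pressure (runner-up logit minus true logit).
        true_logit = logits[0, label]
        others = logits[0].clone()
        others[label] = -float("inf")
        misclass_loss = others.max() - true_logit

        # Term 2: neuron coverage pressure on weakly activated neurons
        # (threshold 0.25 is an arbitrary illustrative choice).
        coverage_loss = hidden[hidden < 0.25].sum()

        loss = misclass_loss + lam * coverage_loss
        model.zero_grad()
        loss.backward()

        # Gradient-ascent step, kept inside an eps-ball around the original x.
        with torch.no_grad():
            x_adv = x_adv + step_size * x_adv.grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv.requires_grad_(True)
    return x_adv.detach()
```

Inputs produced this way would then be labeled with their original classes and mixed into the training set for the retraining step the abstract describes, so that the classification boundary is pushed away from the exposed weak spots.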