Radial Basis Function Network: Its Robustness and Ability to Mitigate Adversarial Examples
Jules Chenou, G. Hsieh, Tonya Fields
2019 International Conference on Computational Science and Computational Intelligence (CSCI), December 2019. DOI: 10.1109/CSCI49370.2019.00024

Abstract: This work continues an ongoing effort to increase the robustness of deep neural networks and thereby mitigate adversarial examples. Our previous work focused on denoising the input dataset by adding colored noise before processing; evaluated with the empirical robustness score, that approach yielded a 1% average improvement for individual noise and a 3.74% average improvement for ensemble noise. The aim of this paper is to demonstrate the robustness of a well-designed radial basis function neural network against adversarial examples. With empirical robustness as the metric, the results show a 72.5% increase under the Fast Gradient Sign Method (FGSM) attack on the MNIST dataset, in comparison to a simple deep network, and a 6.4% increase under FGSM on the CIFAR-10 dataset.
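To make the two ingredients of the abstract concrete, the sketch below shows (a) a Gaussian radial basis function score, whose activation decays with distance from learned centers, and (b) an FGSM-style perturbation that nudges an input in the sign of the loss gradient. This is a minimal, self-contained illustration under assumed toy values (one center, hand-picked epsilon, finite-difference gradients), not the paper's actual networks or training setup.

```python
import math

def rbf_score(x, centers, sigma=1.0):
    # Sum of Gaussian RBF activations: sum_i exp(-||x - c_i||^2 / (2 sigma^2)).
    # The score falls off smoothly as x moves away from every center.
    total = 0.0
    for c in centers:
        d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        total += math.exp(-d2 / (2.0 * sigma ** 2))
    return total

def fgsm_perturb(x, loss, eps=0.1, h=1e-5):
    # FGSM: x_adv = x + eps * sign(dL/dx).
    # Here the gradient is estimated by central finite differences,
    # since this toy model has no autodiff.
    x_adv = []
    for j in range(len(x)):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        g = (loss(xp) - loss(xm)) / (2.0 * h)
        step = 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
        x_adv.append(x[j] + eps * step)
    return x_adv

# Toy setup: a single RBF center at the origin and an input near it.
centers = [[0.0, 0.0]]
x = [0.5, 0.5]

# The attacker maximizes a loss; taking loss = -score means the
# perturbation pushes x away from the center, lowering its score.
loss = lambda z: -rbf_score(z, centers)
x_adv = fgsm_perturb(x, loss, eps=0.1)
print(rbf_score(x, centers), rbf_score(x_adv, centers))
```

Because the RBF response is local, the attack can only reduce the score by moving the input off the centers; this bounded, distance-based behavior is the intuition behind the robustness the paper measures.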