{"title":"Detecting Adversarial Samples with Neuron Coverage","authors":"Huayang Cao, Wei Kong, Xiaohui Kuang, Jianwen Tian","doi":"10.1109/CSAIEE54046.2021.9543451","DOIUrl":null,"url":null,"abstract":"Deep learning technologies have shown impressive performance in many areas. However, deep learning systems can be deceived by using intentionally crafted data, says, adversarial samples. This inherent vulnerability limits its application in safety-critical domains such as automatic driving, military applications and so on. As a kind of defense measures, various approaches have been proposed to detect adversarial samples, among which their efficiency should be further improved to accomplish practical application requirements. In this paper, we proposed a neuron coverage-based approach which detect adversarial samples by distinguishing the activated neurons' distribution features in classifier layer. The analysis and experiments showed that this approach achieves high accuracy while having relatively low computation and storage cost.","PeriodicalId":376014,"journal":{"name":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSAIEE54046.2021.9543451","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning technologies have shown impressive performance in many areas. However, deep learning systems can be deceived by intentionally crafted inputs, so-called adversarial samples. This inherent vulnerability limits their application in safety-critical domains such as autonomous driving and military systems. As a defense measure, various approaches have been proposed to detect adversarial samples, but their efficiency must be further improved to meet practical application requirements. In this paper, we propose a neuron coverage-based approach that detects adversarial samples by distinguishing the distribution features of activated neurons in the classifier layer. Analysis and experiments show that this approach achieves high accuracy while incurring relatively low computation and storage costs.
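To make the idea concrete, below is a minimal sketch of one way a neuron coverage-based detector on the classifier layer could work, assuming PyTorch. The toy model, the activation threshold, the per-class mean-pattern statistic, and the deviation score `tau` are all illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: flag an input as adversarial if its classifier-layer activation
# pattern deviates from the reference pattern of its predicted class.
# All design choices here (threshold, L1 deviation, tau) are assumptions.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    """Toy classifier; the paper targets general DNN classifiers."""

    def __init__(self, in_dim=784, hidden=128, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


def classifier_activation_pattern(model, x, threshold=0.0):
    """Binary vector marking which neurons feeding the classifier layer
    are activated above `threshold` for input batch x."""
    with torch.no_grad():
        acts = model.features(x)
    return (acts > threshold).float()


def build_reference_coverage(model, clean_loader, threshold=0.0):
    """Per-class mean activation pattern over clean data (assumed statistic);
    this is the only state the detector stores, hence the low storage cost."""
    sums, counts = {}, {}
    for x, _ in clean_loader:
        pats = classifier_activation_pattern(model, x, threshold)
        with torch.no_grad():
            preds = model(x).argmax(dim=1)
        for p, pat in zip(preds.tolist(), pats):
            sums[p] = sums.get(p, torch.zeros_like(pat)) + pat
            counts[p] = counts.get(p, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}


def is_adversarial(model, x, reference, threshold=0.0, tau=0.5):
    """Flag x (shape [1, in_dim]) if its activation pattern deviates from
    the reference pattern of its predicted class by more than tau."""
    pat = classifier_activation_pattern(model, x, threshold)
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()
    ref = reference.get(pred)
    if ref is None:
        return True  # no clean reference for this class: treat as suspicious
    deviation = (pat - ref).abs().mean().item()
    return deviation > tau
```

In a setup like this, `tau` would be calibrated on held-out clean samples, and detection at test time costs only one forward pass plus a per-neuron comparison, which is consistent with the abstract's claim of low computation and storage overhead.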