HRAE: Hardware-assisted Randomization against Adversarial Example Attacks

Jiliang Zhang, Shuang Peng, Yupeng Hu, Fei Peng, Wei Hu, Jinmei Lai, Jing Ye, Xiangqi Wang

2020 IEEE 29th Asian Test Symposium (ATS), published 2020-11-23
DOI: 10.1109/ATS49688.2020.9301586 (https://doi.org/10.1109/ATS49688.2020.9301586)
Citations: 2
Abstract
With the rapid advancement of artificial intelligence, machine learning, and neural networks in particular, has surpassed human performance in image recognition, autonomous driving, and medical diagnosis. However, the opacity and inexplicability of these models offer many opportunities to malicious attackers. Recent research has shown that neural networks are vulnerable to adversarial example (AE) attacks: at test time, an attacker fools the model by adding subtle perturbations to an original sample so that the input is misclassified, which poses a serious threat to safety-critical areas such as autonomous driving. To mitigate this threat, this paper proposes a hardware-assisted randomization method against AEs: a hardware approximate computing technique, voltage over-scaling (VOS), is used to randomize the model's training set; the processed data are then used to generate multiple neural network models; finally, these redundant models are combined for ensemble classification and detection of AEs. Various AE attacks on the proposed defense are evaluated to demonstrate its effectiveness.
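The defense pipeline described in the abstract can be sketched in software. The sketch below is an illustration, not the paper's implementation: VOS is a hardware technique, so its data-randomizing effect is stood in for here by a hypothetical `vos_noise` function that randomly corrupts features, and a toy nearest-centroid classifier stands in for a neural network. The ensemble step and the detection-by-disagreement step follow the abstract's description: each model trains on an independently randomized copy of the training set, inputs are classified by majority vote, and low inter-model agreement flags a possible AE. The threshold `agree_thresh` is an assumed parameter, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def vos_noise(x, flip_prob=0.05):
    """Hypothetical stand-in for VOS-induced errors: randomly flip the
    sign of a small fraction of feature values in the training data."""
    mask = rng.random(x.shape) < flip_prob
    return np.where(mask, -x, x)

class CentroidModel:
    """Toy nearest-centroid classifier standing in for a neural network."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        # Squared distance from each sample to each class centroid.
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=-1)
        return self.classes[d.argmin(axis=1)]

def build_ensemble(X, y, n_models=5):
    # Each model sees an independently randomized copy of the training set,
    # mirroring the paper's use of VOS to diversify the redundant models.
    return [CentroidModel().fit(vos_noise(X), y) for _ in range(n_models)]

def classify_and_detect(models, X, agree_thresh=0.8):
    votes = np.stack([m.predict(X) for m in models])        # (n_models, n_samples)
    pred = np.array([np.bincount(col).argmax() for col in votes.T])
    agreement = (votes == pred).mean(axis=0)                 # fraction agreeing with majority
    suspicious = agreement < agree_thresh                    # low consensus -> possible AE
    return pred, suspicious

# Tiny synthetic demo: two well-separated Gaussian clusters.
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
models = build_ensemble(X, y)
pred, flags = classify_and_detect(models, X)
```

On clean, well-separated data the randomized models agree, so almost nothing is flagged; an adversarial perturbation crafted against one model tends to transfer imperfectly to the others, lowering agreement and triggering detection.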