{"title":"深度驾驶机动分类模型的无怀疑黑盒对抗攻击","authors":"Ankur Sarker, Haiying Shen, Tanmoy Sen","doi":"10.1109/ICDCS51616.2021.00080","DOIUrl":null,"url":null,"abstract":"The current autonomous vehicles are equipped with onboard deep neural network (DNN) models to process the data from different sensor and communication units. In the connected autonomous vehicle (CAV) scenario, each vehicle receives time-series driving signals (e.g., speed, brake status) from nearby vehicles through the wireless communication technologies. In the CAV scenario, several black-box adversarial attacks have been proposed, in which an attacker deliberately sends false driving signals to its nearby vehicle to fool its onboard DNN model and cause unwanted traffic incidents. However, the previously proposed black-box adversarial attack can be easily detected. To handle this problem, in this paper, we propose a Suspicion-free Boundary Black-box Adversarial (SBBA) attack, where the attacker utilizes the DNN model's output to design the adversarial perturbation. First, we formulate the attack design problem as a goal satisfying optimization problem with constraints so that the proposed attack will not be easily detectable by detection methods. Second, we solve the proposed optimization problem using the Bayesian optimization method. In our Bayesian optimization framework, we use the Gaussian process to model the posterior distribution of the DNN model, and we use the knowledge gradient function to choose the next sample point. We devise a gradient estimation technique for the knowledge gradient method to reduce the solution searching time. Finally, we conduct extensive experimental evaluations using two real driving datasets. The experimental results show that SBBA outperforms the previous adversarial attacks by 56% higher success rate under detection methods, 238% less time to launch the attacks, and 76% less perturbation (to avoid being detected), and 257% fewer queries (to the DNN model to verify the attack success).","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Suspicion-Free Black-box Adversarial Attack for Deep Driving Maneuver Classification Models\",\"authors\":\"Ankur Sarker, Haiying Shen, Tanmoy Sen\",\"doi\":\"10.1109/ICDCS51616.2021.00080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The current autonomous vehicles are equipped with onboard deep neural network (DNN) models to process the data from different sensor and communication units. In the connected autonomous vehicle (CAV) scenario, each vehicle receives time-series driving signals (e.g., speed, brake status) from nearby vehicles through the wireless communication technologies. In the CAV scenario, several black-box adversarial attacks have been proposed, in which an attacker deliberately sends false driving signals to its nearby vehicle to fool its onboard DNN model and cause unwanted traffic incidents. However, the previously proposed black-box adversarial attack can be easily detected. To handle this problem, in this paper, we propose a Suspicion-free Boundary Black-box Adversarial (SBBA) attack, where the attacker utilizes the DNN model's output to design the adversarial perturbation. 
First, we formulate the attack design problem as a goal satisfying optimization problem with constraints so that the proposed attack will not be easily detectable by detection methods. Second, we solve the proposed optimization problem using the Bayesian optimization method. In our Bayesian optimization framework, we use the Gaussian process to model the posterior distribution of the DNN model, and we use the knowledge gradient function to choose the next sample point. We devise a gradient estimation technique for the knowledge gradient method to reduce the solution searching time. Finally, we conduct extensive experimental evaluations using two real driving datasets. The experimental results show that SBBA outperforms the previous adversarial attacks by 56% higher success rate under detection methods, 238% less time to launch the attacks, and 76% less perturbation (to avoid being detected), and 257% fewer queries (to the DNN model to verify the attack success).\",\"PeriodicalId\":222376,\"journal\":{\"name\":\"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCS51616.2021.00080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS51616.2021.00080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Suspicion-Free Black-box Adversarial Attack for Deep Driving Maneuver Classification Models
Current autonomous vehicles are equipped with onboard deep neural network (DNN) models to process data from different sensor and communication units. In the connected autonomous vehicle (CAV) scenario, each vehicle receives time-series driving signals (e.g., speed, brake status) from nearby vehicles through wireless communication technologies. Several black-box adversarial attacks have been proposed for this scenario, in which an attacker deliberately sends false driving signals to a nearby vehicle to fool its onboard DNN model and cause unwanted traffic incidents. However, these previously proposed black-box adversarial attacks are easily detected. To handle this problem, we propose a Suspicion-free Boundary Black-box Adversarial (SBBA) attack, in which the attacker utilizes the DNN model's output to design the adversarial perturbation. First, we formulate the attack design problem as a goal-satisfying optimization problem with constraints, so that the proposed attack is not easily detectable by existing detection methods. Second, we solve the optimization problem using Bayesian optimization: a Gaussian process models the posterior distribution of the DNN model's output, and a knowledge gradient function chooses the next sample point. We further devise a gradient estimation technique for the knowledge gradient method to reduce the solution search time. Finally, we conduct extensive experimental evaluations using two real driving datasets. The experimental results show that SBBA outperforms previous adversarial attacks with a 56% higher success rate under detection methods, 238% less time to launch attacks, 76% less perturbation (to avoid detection), and 257% fewer queries to the DNN model (to verify attack success).
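To make the approach concrete, below is a minimal sketch of a query-efficient black-box attack loop in the spirit of the abstract: Bayesian optimization over a bounded perturbation of a time-series driving signal, where the perturbation bound plays the role of the "suspicion-free" constraint. This is not the authors' implementation: query_model, clean_signal, the bound EPS, and the query budget are hypothetical stand-ins, and expected improvement is used here as a simpler stand-in for the paper's knowledge-gradient acquisition with gradient estimation.

```python
# Sketch only: black-box attack via Bayesian optimization under a
# perturbation bound. Names and numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
T = 20       # length of the time-series driving signal
EPS = 0.05   # perturbation bound: keeps the attack inconspicuous

def query_model(signal):
    """Hypothetical black-box DNN: returns its confidence in the true
    maneuver class (lower is better for the attacker)."""
    return float(np.clip(np.sin(signal).mean() * 0.5 + 0.5, 0.0, 1.0))

clean_signal = rng.normal(size=T)

def objective(delta):
    # Attacker goal: drive down the model's confidence in the true class
    # while the perturbation stays inside the suspicion-free box.
    return query_model(clean_signal + delta)

# Initial random bounded perturbations and their queried scores.
X = rng.uniform(-EPS, EPS, size=(5, T))
y = np.array([objective(d) for d in X])

# Gaussian process surrogate for the black-box model's output.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(30):  # query budget
    gp.fit(X, y)
    # Candidate perturbations sampled inside the box constraint.
    cand = rng.uniform(-EPS, EPS, size=(256, T))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected improvement (minimization form); the paper instead uses
    # a knowledge-gradient acquisition with an estimated gradient.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    pick = cand[np.argmax(ei)]
    X = np.vstack([X, pick])
    y = np.append(y, objective(pick))

print("lowest confidence reached:", y.min())
```

The box constraint on the perturbation is what connects the optimization to detectability: every candidate the surrogate ever evaluates already satisfies the bound, so no query risks tripping a detector, and the surrogate-plus-acquisition loop is what keeps the number of queries to the black-box model small.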