{"title":"一种针对UAP语音识别的鲁棒对抗性攻击","authors":"Ziheng Qin , Xianglong Zhang , Shujun Li","doi":"10.1016/j.hcc.2022.100098","DOIUrl":null,"url":null,"abstract":"<div><p>Speech recognition (SR) systems based on deep neural networks are increasingly widespread in smart devices. However, they are vulnerable to human-imperceptible adversarial attacks, which cause the SR to generate incorrect or targeted adversarial commands. Meanwhile, audio adversarial attacks are particularly susceptible to various factors, e.g., ambient noise, after applying them to a real-world attack. To circumvent this issue, we develop a universal adversarial perturbation (UAP) generation method to construct robust real-world UAP by integrating ambient noise into the generation process. The proposed UAP can work well in the case of input-agnostic and independent sources. We validate the effectiveness of our method on two different SRs in different real-world scenarios and parameters, the results demonstrate that our method yields state-of-the-art performance, i.e. given any audio waveform, the word error rate can be up to 80%. Extensive experiments investigate the impact of different parameters (e.g, signal-to-noise ratio, distance, and attack angle) on the attack success rate.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"3 1","pages":"Article 100098"},"PeriodicalIF":3.2000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A robust adversarial attack against speech recognition with UAP\",\"authors\":\"Ziheng Qin , Xianglong Zhang , Shujun Li\",\"doi\":\"10.1016/j.hcc.2022.100098\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Speech recognition (SR) systems based on deep neural networks are increasingly widespread in smart devices. However, they are vulnerable to human-imperceptible adversarial attacks, which cause the SR to generate incorrect or targeted adversarial commands. Meanwhile, audio adversarial attacks are particularly susceptible to various factors, e.g., ambient noise, after applying them to a real-world attack. To circumvent this issue, we develop a universal adversarial perturbation (UAP) generation method to construct robust real-world UAP by integrating ambient noise into the generation process. The proposed UAP can work well in the case of input-agnostic and independent sources. We validate the effectiveness of our method on two different SRs in different real-world scenarios and parameters, the results demonstrate that our method yields state-of-the-art performance, i.e. given any audio waveform, the word error rate can be up to 80%. 
Extensive experiments investigate the impact of different parameters (e.g, signal-to-noise ratio, distance, and attack angle) on the attack success rate.</p></div>\",\"PeriodicalId\":100605,\"journal\":{\"name\":\"High-Confidence Computing\",\"volume\":\"3 1\",\"pages\":\"Article 100098\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"High-Confidence Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667295222000502\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667295222000502","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Speech recognition (SR) systems based on deep neural networks are increasingly widespread in smart devices. However, they are vulnerable to human-imperceptible adversarial attacks that cause the SR system to output incorrect or attacker-chosen commands. At the same time, audio adversarial perturbations are highly sensitive to environmental factors such as ambient noise when deployed in real-world attacks. To address this issue, we develop a universal adversarial perturbation (UAP) generation method that constructs robust real-world UAPs by integrating ambient noise into the generation process. The proposed UAP is input-agnostic and independent of the audio source. We validate the effectiveness of our method against two different SR systems under various real-world scenarios and parameter settings. The results demonstrate state-of-the-art performance: given any audio waveform, the word error rate can reach 80%. Extensive experiments further investigate the impact of different parameters (e.g., signal-to-noise ratio, distance, and attack angle) on the attack success rate.
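To make the mechanism concrete, below is a minimal sketch, not the authors' code, of how ambient noise can be folded into UAP optimization in the way the abstract describes. It is written in Python with PyTorch; the SR model, loss function, noise bank, and all hyperparameters (epsilon, learning rate, gain range) are illustrative assumptions rather than values from the paper.

import torch

def generate_uap(model, loss_fn, utterances, targets, noise_bank,
                 epsilon=0.05, lr=1e-3, steps=1000):
    """Return one input-agnostic perturbation with ||uap||_inf <= epsilon."""
    sig_len = utterances[0].shape[-1]   # assumes equal-length waveforms
    uap = torch.zeros(sig_len, requires_grad=True)
    opt = torch.optim.Adam([uap], lr=lr)
    for step in range(steps):
        i = step % len(utterances)      # cycle over many utterances,
        x, y = utterances[i], targets[i]  # making the UAP input-agnostic
        # Mix in a random ambient-noise clip at a random gain to mimic the
        # environmental distortion of an over-the-air, real-world attack.
        noise = noise_bank[torch.randint(len(noise_bank), (1,)).item()]
        gain = 0.1 * torch.rand(1).item()
        x_adv = torch.clamp(x + uap + gain * noise, -1.0, 1.0)
        # Untargeted attack: ascend the SR loss so the transcription degrades
        # (for a targeted command, descend toward the target instead).
        loss = -loss_fn(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project back onto the L-inf ball so the UAP stays imperceptible.
        with torch.no_grad():
            uap.clamp_(-epsilon, epsilon)
    return uap.detach()

In a loop of this style, sampling a fresh noise clip and gain at every step acts as an expectation-over-transformation: a perturbation that survives many random noise conditions during optimization is more likely to survive real playback, where, as in the paper's evaluation, its effect would be measured by the word error rate it induces.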