Enhancing cross-domain transferability of black-box adversarial attacks on speaker recognition systems using linearized backpropagation

Umang Patel, Shruti Bhilare, Avik Hati

Pattern Analysis and Applications, published 2024-05-13. DOI: 10.1007/s10044-024-01269-w

Abstract: A speaker recognition system (SRS) serves as a gatekeeper for secure access, using the unique vocal characteristics of individuals for identification and verification. SRS can be found in several biometric security applications, such as banks, autonomous cars, the military, and smart devices. However, as technology advances, so do the threats to these models. With the rise of adversarial attacks, these models have been put to the test. Adversarial machine learning (AML) techniques have been used to exploit vulnerabilities in SRS, threatening their reliability and security. In this study, we concentrate on transferability in AML within the realm of SRS. Transferability refers to the capability of adversarial examples generated for one model to fool another model. Our research centers on enhancing the transferability of adversarial attacks on SRS. To achieve this goal, our approach strategically skips non-linear activation functions during the backpropagation process. The proposed method yields promising results in enhancing the transferability of adversarial examples across diverse SRS architectures, parameters, features, and datasets. To validate its effectiveness, we conduct an evaluation using the state-of-the-art FoolHD attack, an attack designed specifically for exploiting SRS. By applying our method in various scenarios, including cross-architecture, cross-parameter, cross-feature, and cross-dataset settings, we demonstrate its resilience and versatility. To evaluate the performance of the proposed method in improving transferability, we introduce three novel metrics: enhanced transferability, relative transferability, and effort in enhancing transferability. Our experiments demonstrate a significant boost in the transferability of adversarial examples in SRS. This research contributes to the growing body of knowledge on AML for SRS and emphasizes the urgency of developing robust defenses to safeguard these critical biometric systems.
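The core idea of the approach, skipping non-linear activation functions during backpropagation, can be illustrated on a toy network. The sketch below is a hypothetical minimal example (not the authors' implementation): a two-layer network with a ReLU, where the standard backward pass multiplies by the ReLU derivative, while the linearized backward pass treats the activation as identity. The forward pass, and hence the attack's loss value, is unchanged; only the gradient used to craft the adversarial perturbation differs.

```python
import numpy as np

# Hypothetical toy network: x -> W1 -> ReLU -> W2 -> score.
# Weights are random; this only illustrates the backward-pass change.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((1, 4))
x = rng.standard_normal(3)

h = W1 @ x             # pre-activation
a = np.maximum(h, 0)   # ReLU (forward pass is identical in both cases)
score = W2 @ a         # scalar output an attacker wants to perturb

# Standard backprop: the gradient w.r.t. x passes through the ReLU
# derivative (1 where h > 0, else 0), zeroing inactive units.
g_standard = W1.T @ (W2.ravel() * (h > 0))

# Linearized backprop: skip the ReLU derivative on the backward pass
# only, treating the activation as identity. The resulting gradient
# is smoother and less tied to the surrogate model's activation
# pattern, which is what aids transfer to other models.
g_linearized = W1.T @ W2.ravel()
```

On units where the ReLU is active, the two gradients agree; the linearized version simply keeps the contribution of inactive units instead of discarding it.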
Journal description:
The journal publishes high quality articles in areas of fundamental research in intelligent pattern analysis and applications in computer science and engineering. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. In addition, the journal will also publish articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control. The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition and uncertainty management in applied pattern recognition are particularly solicited.