The feasibility and inevitability of stealth attacks
Ivan Yu. Tyukin, Desmond J. Higham, Eliyas Woldegeorgis, Alexander N. Gorban
IMA Journal of Applied Mathematics, published 19 October 2023. DOI: 10.1093/imamat/hxad027
Abstract: We develop and study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence (AI) systems, including deep learning neural networks. In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself. Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team. It could also be made by those wishing to exploit a “democratization of AI” agenda, where network architectures and trained parameter sets are shared publicly. We develop a range of new implementable attack strategies with accompanying analysis, showing that with high probability a stealth attack can be made transparent, in the sense that system performance is unchanged on a fixed validation set which is unknown to the attacker, while evoking any desired output on a trigger input of interest. The attacker only needs estimates of the size of the validation set and the spread of the AI's relevant latent space. In the case of deep learning neural networks, we show that a one-neuron attack is possible (a modification to the weights and bias associated with a single neuron), revealing a vulnerability arising from over-parameterization. We illustrate these concepts using state-of-the-art architectures on two standard image data sets. Guided by the theory and computational results, we also propose strategies to guard against stealth attacks.
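The one-neuron attack summarized in the abstract can be made concrete with a small numerical experiment. The Python/NumPy sketch below is an illustrative reconstruction under simplifying assumptions (a toy random ReLU network, unit-normalised latent vectors, and one deliberately redundant neuron standing in for over-parameterization), not the authors' exact construction from the paper; names such as plant_trigger are hypothetical.

```python
# Minimal sketch of a "one neuron" stealth attack: the weights and bias of a
# single hidden neuron are rewritten so that it stays silent on ordinary
# inputs but fires on one chosen trigger and forces a desired output class.
# Illustrative only; toy network and parameter choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Toy "trained" network: latent vector -> hidden ReLU layer -> class logits.
d_latent, d_hidden, n_classes = 32, 64, 10
W1 = rng.normal(0.0, 0.3, (d_hidden, d_latent))
b1 = rng.normal(0.0, 0.1, d_hidden)
W2 = rng.normal(0.0, 0.3, (n_classes, d_hidden))
# Model over-parameterization: assume one neuron's outgoing weights are
# negligible, so repurposing it leaves normal behaviour essentially intact.
spare = d_hidden - 1
W2[:, spare] = 0.0

def predict(x):
    return int(np.argmax(W2 @ relu(W1 @ x + b1)))

def plant_trigger(trigger_latent, target_class, neuron, tau=0.8, gain=50.0):
    """Rewrite one neuron so it detects the trigger direction and pushes the
    logits toward target_class whenever it fires."""
    u = trigger_latent / np.linalg.norm(trigger_latent)
    W1[neuron, :] = gain * u          # pre-activation = gain * (u . x - tau)
    b1[neuron] = -gain * tau          # silent unless x is almost aligned with u
    W2[:, neuron] = 0.0
    W2[target_class, neuron] = 10.0   # dominates the other logits when active

# Surrogate "validation" latent vectors, unknown to the attacker; only their
# rough spread (here: unit norm) is assumed known.
validation = rng.normal(size=(200, d_latent))
validation /= np.linalg.norm(validation, axis=1, keepdims=True)
before = [predict(x) for x in validation]

trigger = rng.normal(size=d_latent)
plant_trigger(trigger, target_class=3, neuron=spare)

after = [predict(x) for x in validation]
print("validation predictions changed:", sum(a != b for a, b in zip(before, after)))
print("trigger classified as:", predict(trigger / np.linalg.norm(trigger)))
```

In this toy setting the validation predictions are unchanged (the repurposed neuron's hyperplane lies far from the assumed spread of the latent vectors, so it never activates on them), while the trigger input is steered to the attacker's chosen class; this mirrors, in miniature, the transparency property described in the abstract.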
About the journal:
The IMA Journal of Applied Mathematics is a direct successor of the Journal of the Institute of Mathematics and its Applications which was started in 1965. It is an interdisciplinary journal that publishes research on mathematics arising in the physical sciences and engineering as well as suitable articles in the life sciences, social sciences, and finance. Submissions should address interesting and challenging mathematical problems arising in applications. A good balance between the development of the application(s) and the analysis is expected. Papers that either use established methods to address solved problems or that present analysis in the absence of applications will not be considered.
The journal welcomes submissions in many research areas. Examples are: continuum mechanics, materials science and elasticity (including boundary layer theory, combustion, complex flows and soft matter, electrohydrodynamics and magnetohydrodynamics, geophysical flows, granular flows, interfacial and free surface flows, vortex dynamics); elasticity theory; linear and nonlinear wave propagation, nonlinear optics and photonics; inverse problems; applied dynamical systems and nonlinear systems; mathematical physics; stochastic differential equations and stochastic dynamics; network science; industrial applications.