Mappings, dimensionality and reversing out of deep neural networks

IF 1.4 · JCR Q2 (MATHEMATICS, APPLIED) · CAS Region 4 (Mathematics) · IMA Journal of Applied Mathematics · Pub Date: 2023-06-23 · DOI: 10.1093/imamat/hxad019
Zhaofang Cui, P. Grindrod
Citations: 1

Abstract

We consider a large cloud of vectors formed at each layer of a standard neural network, corresponding to a large number of separate inputs which were presented independently to the classifier. Although the embedding dimension (the total possible degrees of freedom) reduces as we pass through successive layers, from input to output, the actual dimensionality of the point clouds that the layers contain does not necessarily reduce. We argue that this phenomenon may result in a vulnerability to (universal) adversarial attacks (which are small specific perturbations). This analysis requires us to estimate the intrinsic dimension of point clouds (with values between 20 and 200) within embedding spaces of dimension 1000 up to 800,000. This needs some care. If the cloud dimension actually increases from one layer to the next, it implies there is some ‘volume filling’ over-folding, and thus there exist possible small directional perturbations in the latter space that are equivalent to shifting large distances within the former space, thus inviting the possibility of universal and imperceptible attacks.
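The abstract does not say which intrinsic-dimension estimator the authors use, so as a hedged illustration the sketch below applies the TwoNN estimator (Facco et al., 2017), one standard choice for point clouds whose intrinsic dimension is far below the embedding dimension. It exploits the fact that, locally, the ratio of each point's second- to first-nearest-neighbour distance follows a Pareto law whose shape parameter is the intrinsic dimension. The synthetic cloud (500 points, 2 active coordinates inside a 1000-dimensional embedding space) mimics the paper's regime of a low cloud dimension inside a much larger embedding space; it is not the paper's data.

```python
import numpy as np

def twonn_dimension(points: np.ndarray) -> float:
    """TwoNN estimate of intrinsic dimension from the ratios of each
    point's second- to first-nearest-neighbour Euclidean distances."""
    # Pairwise squared distances via ||x||^2 + ||y||^2 - 2 x.y
    # (avoids materialising an N x N x D difference tensor).
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    d2 = np.maximum(d2, 0.0)              # clamp tiny negative rounding errors
    dists = np.sqrt(d2)
    np.fill_diagonal(dists, np.inf)       # exclude self-distances
    sorted_d = np.sort(dists, axis=1)
    r1, r2 = sorted_d[:, 0], sorted_d[:, 1]
    mu = r2 / r1                          # ratios ~ Pareto with shape = dimension
    # Maximum-likelihood estimate of the Pareto shape parameter.
    return len(mu) / np.log(mu).sum()

# Synthetic example: a 2-D Gaussian cloud embedded in a 1000-D space.
rng = np.random.default_rng(0)
cloud = np.zeros((500, 1000))
cloud[:, :2] = rng.standard_normal((500, 2))  # only 2 active coordinates
print(twonn_dimension(cloud))                 # close to 2, not 1000
```

The estimator recovers the cloud's dimension (about 2) rather than the embedding dimension (1000), which is exactly the distinction the abstract draws between the embedding space and the point cloud it contains.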
Source journal: IMA Journal of Applied Mathematics
CiteScore: 2.30 · Self-citation rate: 8.30% · Annual articles: 32 · Review time: 24 months
About the journal: The IMA Journal of Applied Mathematics is a direct successor of the Journal of the Institute of Mathematics and its Applications, which was started in 1965. It is an interdisciplinary journal that publishes research on mathematics arising in the physical sciences and engineering as well as suitable articles in the life sciences, social sciences, and finance. Submissions should address interesting and challenging mathematical problems arising in applications. A good balance between the development of the application(s) and the analysis is expected. Papers that either use established methods to address solved problems or that present analysis in the absence of applications will not be considered. The journal welcomes submissions in many research areas. Examples are: continuum mechanics, materials science and elasticity, including boundary layer theory, combustion, complex flows and soft matter, electrohydrodynamics and magnetohydrodynamics, geophysical flows, granular flows, interfacial and free surface flows, vortex dynamics; elasticity theory; linear and nonlinear wave propagation, nonlinear optics and photonics; inverse problems; applied dynamical systems and nonlinear systems; mathematical physics; stochastic differential equations and stochastic dynamics; network science; industrial applications.
Latest articles from this journal:
- The impact of confinement on the deformation of an elastic particle under axisymmetric tube flow
- On the P-Irreducibility of Quintic Positive Polynomials
- An explicit Maclaurin series solution to non-autonomous and non-homogeneous evolution equation, Omega Calculus, and associated applications
- Can physics-informed neural networks beat the finite element method?
- Trust your source: quantifying source condition elements for variational regularisation methods