Complexity of Deciding Injectivity and Surjectivity of ReLU Neural Networks

Vincent Froese, Moritz Grillo, Martin Skutella
DOI: arxiv-2405.19805 (https://doi.org/arxiv-2405.19805)
Journal: arXiv - CS - Discrete Mathematics
Published: 2024-05-30
Citations: 0

Abstract

Neural networks with ReLU activation play a key role in modern machine learning. In view of safety-critical applications, the verification of trained networks is of great importance and necessitates a thorough understanding of essential properties of the function computed by a ReLU network, including characteristics like injectivity and surjectivity. Recently, Puthawala et al. [JMLR 2022] came up with a characterization for injectivity of a ReLU layer, which implies an exponential time algorithm. However, the exact computational complexity of deciding injectivity remained open. We answer this question by proving coNP-completeness of deciding injectivity of a ReLU layer. On the positive side, as our main result, we present a parameterized algorithm which yields fixed-parameter tractability of the problem with respect to the input dimension. In addition, we also characterize surjectivity for two-layer ReLU networks with one-dimensional output. Remarkably, the decision problem turns out to be the complement of a basic network verification task. We prove NP-hardness for surjectivity, implying a stronger hardness result than previously known for the network verification problem. Finally, we reveal interesting connections to computational convexity by formulating the surjectivity problem as a zonotope containment problem.
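To make the injectivity question concrete, the following minimal sketch (illustrative only, not the characterization or algorithm from the paper; the function name `relu_layer` is hypothetical) shows how a single ReLU layer x ↦ ReLU(Wx + b) can map two distinct inputs to the same output, and hence fail to be injective:

```python
import numpy as np

def relu_layer(W, b, x):
    """Compute ReLU(Wx + b) componentwise."""
    return np.maximum(W @ x + b, 0.0)

# A 2x2 layer with negative biases: any input whose pre-activation
# Wx + b is componentwise non-positive is mapped to the zero vector,
# so distinct inputs collapse and the layer is not injective on R^2.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([-1.0, -1.0])

x1 = np.array([0.0, 0.0])   # pre-activation [-1, -1] -> output [0, 0]
x2 = np.array([0.5, 0.5])   # pre-activation [-0.5, -0.5] -> output [0, 0]

y1 = relu_layer(W, b, x1)
y2 = relu_layer(W, b, x2)
print(y1, y2)  # both [0. 0.]
```

Deciding whether such a collision exists for a given layer is exactly the problem the paper shows to be coNP-complete in general, yet fixed-parameter tractable in the input dimension.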
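The surjectivity question for two-layer networks with one-dimensional output can also be made tangible with a toy sketch (my own assumptions, not the paper's construction): if every output-layer weight aᵢ is nonnegative, then f(x) = Σᵢ aᵢ · ReLU(wᵢᵀx + bᵢ) + c satisfies f(x) ≥ c for all x, so f misses every value below c and cannot be surjective onto ℝ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network with 1-D output:
#   f(x) = sum_i a_i * ReLU(w_i . x + b_i) + c
w = rng.normal(size=(5, 3))      # hidden-layer weights
b = rng.normal(size=5)           # hidden-layer biases
a = np.abs(rng.normal(size=5))   # output weights, forced nonnegative
c = -2.0                         # output bias

def f(x):
    return a @ np.maximum(w @ x + b, 0.0) + c

# Since a_i >= 0 and ReLU(.) >= 0, every term is nonnegative and
# f(x) >= c everywhere; empirically, no sample dips below c.
samples = rng.normal(size=(10000, 3))
vals = np.array([f(x) for x in samples])
print(vals.min() >= c)  # True
```

Deciding surjectivity in general (without such a sign restriction) is what the paper proves NP-hard, via its formulation as a zonotope containment problem.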
Latest articles in this journal
- Reconfiguration of labeled matchings in triangular grid graphs
- Decision problems on geometric tilings
- Ants on the highway
- A sequential solution to the density classification task using an intermediate alphabet
- Complexity of Deciding the Equality of Matching Numbers