Complexity of Deciding Injectivity and Surjectivity of ReLU Neural Networks
Vincent Froese, Moritz Grillo, Martin Skutella
arXiv:2405.19805, arXiv - CS - Discrete Mathematics, 2024-05-30
Neural networks with ReLU activation play a key role in modern machine
learning. In view of safety-critical applications, the verification of trained
networks is of great importance and necessitates a thorough understanding of
essential properties of the function computed by a ReLU network, including
characteristics like injectivity and surjectivity. Recently, Puthawala et al.
[JMLR 2022] gave a characterization of injectivity for a ReLU layer, which
yields an exponential-time algorithm. However, the exact computational
complexity of deciding injectivity remained open. We answer this question by
proving coNP-completeness of deciding injectivity of a ReLU layer. On the
positive side, as our main result, we present a parameterized algorithm that
yields fixed-parameter tractability of the problem with respect to the input
dimension. In addition, we characterize surjectivity for two-layer ReLU
networks with one-dimensional output. Remarkably, the decision problem turns
out to be the complement of a basic network verification task. We prove
NP-hardness for surjectivity, implying a stronger hardness result than
previously known for the network verification problem. Finally, we reveal
interesting connections to computational convexity by formulating the
surjectivity problem as a zonotope containment problem.
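As a minimal illustration of the injectivity question studied above (the weights below are hypothetical and not taken from the paper): a ReLU layer computes x ↦ max(0, Wx + b) componentwise, so any two inputs whose pre-activations are all negative are both clipped to the zero vector, witnessing non-injectivity.

```python
def relu_layer(W, b, x):
    """Componentwise ReLU of the affine map: max(0, W @ x + b)."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Hypothetical 2x2 identity layer (illustrative only, not from the paper).
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]

x1 = [-1.0, -2.0]
x2 = [-3.0, -0.5]

# Every pre-activation is negative for both inputs, so ReLU clips
# both to the zero vector: two distinct inputs, identical output.
y1 = relu_layer(W, b, x1)
y2 = relu_layer(W, b, x2)
print(y1 == y2)  # True -> this layer is not injective
```

Deciding whether such a collision exists for an arbitrary layer is exactly the problem shown coNP-complete above; the brute-force route of checking all activation patterns is what makes the naive algorithm exponential in the input dimension.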