{"title":"Reachability in Simple Neural Networks","authors":"Marco Sälzer, Martin Lange","doi":"10.3233/fi-222160","DOIUrl":null,"url":null,"abstract":"We investigate the complexity of the reachability problem for (deep) neural networks: does it compute valid output given some valid input? It was recently claimed that the problem is NP-complete for general neural networks and specifications over the input/output dimension given by conjunctions of linear inequalities. We recapitulate the proof and repair some flaws in the original upper and lower bound proofs. Motivated by the general result, we show that NP-hardness already holds for restricted classes of simple specifications and neural networks. Allowing for a single hidden layer and an output dimension of one as well as neural networks with just one negative, zero and one positive weight or bias is sufficient to ensure NP-hardness. Additionally, we give a thorough discussion and outlook of possible extensions for this direction of research on neural network verification.","PeriodicalId":56310,"journal":{"name":"Fundamenta Informaticae","volume":"7 ","pages":"0"},"PeriodicalIF":0.4000,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fundamenta Informaticae","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/fi-222160","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
We investigate the complexity of the reachability problem for (deep) neural networks: given some valid input, does the network compute a valid output? It was recently claimed that this problem is NP-complete for general neural networks and specifications over the input/output dimensions given by conjunctions of linear inequalities. We recapitulate the proof and repair some flaws in the original upper- and lower-bound arguments. Motivated by the general result, we show that NP-hardness already holds for restricted classes of simple specifications and simple neural networks: a single hidden layer, an output dimension of one, and weights and biases drawn from just three values (one negative, zero, and one positive) suffice to ensure NP-hardness. Additionally, we give a thorough discussion of possible extensions of this line of research on neural network verification, together with an outlook.
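To make the decision problem concrete, here is a minimal sketch in Python; it is an illustration, not the paper's construction, and the function name `reachable` is hypothetical. It encodes a one-hidden-layer ReLU network with input and output specifications given as conjunctions of linear inequalities, and decides reachability exactly by enumerating all 2^h activation patterns of the h hidden neurons: fixing a pattern makes the network affine in its input, so each check reduces to a linear-programming feasibility test.

```python
# A minimal sketch, assuming the standard setting of the problem: decide
# reachability for a one-hidden-layer ReLU network by enumerating all 2^h
# activation patterns; each fixed pattern yields an LP feasibility test.
from itertools import product

import numpy as np
from scipy.optimize import linprog


def reachable(W1, b1, W2, b2, A_in, c_in, A_out, c_out):
    """Does some x with A_in @ x <= c_in yield y = W2 @ relu(W1 @ x + b1) + b2
    satisfying A_out @ y <= c_out?  All arguments are NumPy arrays."""
    n, h = W1.shape[1], W1.shape[0]   # input dimension, hidden neurons
    for pattern in product([0, 1], repeat=h):
        s = np.array(pattern)
        D = np.diag(s).astype(float)
        rows, rhs = [A_in], [c_in]
        # Phase constraints: an active neuron needs W1[i] @ x + b1[i] >= 0,
        # an inactive one needs W1[i] @ x + b1[i] <= 0.
        for i in range(h):
            if s[i]:
                rows.append(-W1[i:i + 1]); rhs.append(np.array([b1[i]]))
            else:
                rows.append(W1[i:i + 1]); rhs.append(np.array([-b1[i]]))
        # Under this pattern, y = W2 @ D @ (W1 @ x + b1) + b2 is affine in x,
        # so the output specification becomes linear inequalities over x.
        rows.append(A_out @ W2 @ D @ W1)
        rhs.append(c_out - A_out @ (W2 @ D @ b1 + b2))
        res = linprog(np.zeros(n), A_ub=np.vstack(rows),
                      b_ub=np.concatenate(rhs),
                      bounds=[(None, None)] * n, method="highs")
        if res.status == 0:  # LP feasible: a witness input exists
            return True
    return False


# Tiny instance with weights and biases in {-1, 0, 1}:
# relu(x) - relu(-x) computes the identity on a single input.
W1 = np.array([[1.0], [-1.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, -1.0]]);   b2 = np.zeros(1)
A_in = np.array([[1.0], [-1.0]]); c_in = np.array([2.0, 2.0])  # -2 <= x <= 2
A_out = np.array([[-1.0]]);       c_out = np.array([-1.5])     # y >= 1.5
print(reachable(W1, b1, W2, b2, A_in, c_in, A_out, c_out))     # True, e.g. x = 1.5
```

The brute-force loop is exponential in h, which is consistent with the complexity result: membership in NP follows because a nondeterministic machine can guess the right activation pattern and verify a single LP in polynomial time, while the paper's contribution concerns how simple the networks and specifications can be while hardness persists.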
About the journal:
Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Submissions are encouraged that contribute:
solutions, by mathematical methods, to problems emerging in computer science,
solutions to mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to):
theory of computing,
complexity theory,
algorithms and data structures,
computational aspects of combinatorics and graph theory,
programming language theory,
theoretical aspects of programming languages,
computer-aided verification,
computer science logic,
database theory,
logic programming,
automated deduction,
formal languages and automata theory,
concurrency and distributed computing,
cryptography and security,
theoretical issues in artificial intelligence,
machine learning,
pattern recognition,
algorithmic game theory,
bioinformatics and computational biology,
quantum computing,
probabilistic methods,
algebraic and categorical methods.