Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)
We consider the following (promise) problem, denoted ED (for Entropy Difference): the input is a pair of circuits, and YES instances (resp., NO instances) are pairs in which the first (resp., second) circuit generates a distribution with noticeably higher entropy. On one hand, we show that any language having an (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result that HVSZK (i.e., honest-verifier statistical zero knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK is closed under complementation (since ED easily reduces to its complement). Among the new results obtained is the equivalence of a weak notion of statistical zero knowledge to the standard one.
{"title":"Comparing entropies in statistical zero knowledge with applications to the structure of SZK","authors":"Oded Goldreich, S. Vadhan","doi":"10.1109/CCC.1999.766262","DOIUrl":"https://doi.org/10.1109/CCC.1999.766262","url":null,"abstract":"We consider the following (promise) problem, denoted ED (for Entropy Difference): The input is a pair of circuits, and YES instances (resp., NO instances) are such pairs in which the first (resp., second) circuit generates a distribution with noticeably higher entropy. On one hand we show that any language having a (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result by which HVSZK: (i.e., honest-verifier statistical zero knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK: is closed under complementation (since ED easily reduces to its complement). Among the new results obtained is an equivalence of a weak notion of statistical zero knowledge to the standard one.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122595380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kolmogorov complexity has proven to be a very useful tool in simplifying and improving proofs that use complicated combinatorial arguments. Using Kolmogorov complexity for oracle construction, we obtain separation results that are much stronger than separations obtained previously, even with the use of very complicated combinatorial arguments. Moreover, the use of Kolmogorov arguments almost trivializes the construction itself. In particular, we construct relativized worlds where: 1. NP ∩ coNP ⊆ P/poly. 2. NP has a set that is both simple and NP ∩ coNP-immune. 3. coNP has a set that is both simple and NP ∩ coNP-immune. 4. Π₂ᵖ has a set that is both simple and Π₂ᵖ ∩ Σ₂ᵖ-immune.
{"title":"Complicated complementations","authors":"H. Buhrman, L. Torenvliet","doi":"10.1109/CCC.1999.766281","DOIUrl":"https://doi.org/10.1109/CCC.1999.766281","url":null,"abstract":"Kolmogorov complexity has proven to be a very useful tool in simplifying and improving proofs that use complicated combinatorial arguments. Using Kolmogorov complexity for oracle construction, we obtain separation results that are much stronger than separations obtained previously even with the use of very complicated combinatorial arguments. Moreover the use of Kolmogorov arguments almost trivializes the construction itself: In particular we construct relativized worlds where: 1. NP/spl cap/CoNP/spl isin/P/poly. 2. NP has a set that is both simple and NP/spl cap/CoNP-immune. 3. CoNP has a set that is both simple and NP/spl cap/CoNP-immune. 4. /spl Pi//sub 2//sup p/ has a set that is both simple and /spl Pi//sub 2//sup p//spl cap//spl Sigma//sup 2p/-immune.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126590524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We develop a general strategy for proving width lower bounds, which follows Haken's original proof technique but is now simple and clear. It reveals that large width is implied by certain natural expansion properties of the clauses (axioms) of the tautology in question. We show that in the classical examples of the pigeonhole principle, Tseitin graph tautologies, and random k-CNFs, these expansion properties are quite simple to prove. We further illustrate the power of this approach by proving new exponential lower bounds for two different restricted versions of the pigeonhole principle. One restriction allows the encoding of the principle to use arbitrarily many extension variables in a structured way; the second allows every pigeon to choose a hole from some constant-size set of holes.
{"title":"Short proofs are narrow-resolution made simple","authors":"Eli Ben-Sasson, A. Wigderson","doi":"10.1145/375827.375835","DOIUrl":"https://doi.org/10.1145/375827.375835","url":null,"abstract":"We develop a general strategy for proving width lower bounds, which follows Haken's original proof technique but is now simple and clear. It reveals that large width is implied by certain natural expansion properties of the clauses (axioms) of the tautology in question. We show that in the classical examples of the Pigeonhole principle, Tseitin graph tautologies, and random k-CNFs, these expansion properties are quite simple to prove. We further illustrate the power of this approach by proving new exponential lower bounds to two different restricted versions of the pigeon-hole principle. One restriction allows the encoding of the principle to use arbitrarily many extension variables in a structured way. The second restriction allows every pigeon to choose a hole from some constant size set of holes.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134125529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the computational complexity of solving systems of equations over a finite group. An equation over a group G is an expression of the form w₁ · w₂ ⋯ w_k = id, where each w_i is either a variable, an inverted variable, or a group constant, and id is the identity element of G. A solution to such an equation is an assignment of the variables (to values in G) which realizes the equality. A system of equations is a collection of such equations; a solution is then an assignment which simultaneously realizes each equation. We demonstrate that the problem of determining whether a (single) equation has a solution is NP-complete for all nonsolvable groups G. For nilpotent groups, this same problem is shown to be in P. The analogous problem for systems of such equations is shown to be NP-complete if G is non-Abelian, and in P otherwise. Finally, we observe some connections between these languages and the theory of nonuniform automata.
{"title":"The complexity of solving equations over finite groups","authors":"M. Goldmann, A. Russell","doi":"10.1109/CCC.1999.766266","DOIUrl":"https://doi.org/10.1109/CCC.1999.766266","url":null,"abstract":"We study the computational complexity of solving systems of equations over a finite group. An equation over a group G is an expression of the form w/sub 1//spl middot/w/sub 2//spl middot//spl middot//spl middot//spl middot//spl middot/w/sub k/=id where each w/sub i/ is either a variable, an inverted variable, or group constant and id is the identity element of G. A solution to such an equation is an assignment of the variables (to values in G) which realizes the equality. A system of equations is a collection of such equations; a solution is then an assignment which simultaneously realizes each equation. We demonstrate that the problem of determining if a (single) equation has a solution is NP-complete for all nonsolvable groups G. For nilpotent groups, this same problem is shown to be in P. The analogous problem for systems of such equations is shown to be NP-complete if G is non-Abelian, and in P otherwise. Finally, we observe some connections between these languages and the theory of nonuniform automata.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131036393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The introduction of randomization into efficient computation has been one of the most fertile and useful ideas in computer science. In cryptography and asynchronous computing, randomization makes possible tasks that are impossible to perform deterministically. Even for function computation, many examples are known in which randomization allows considerable savings in resources like space and time over deterministic algorithms, or even "only" simplifies them. But to what extent is this seeming power of randomness over determinism real? The most famous concrete version of this question regards the power of BPP, the class of problems solvable by probabilistic polynomial-time algorithms making small constant error. What is the relative power of such algorithms compared to deterministic ones? This is largely open. On the one hand, it is possible that P = BPP, i.e., randomness is useless for solving new problems in polynomial time. On the other, we might have BPP = EXP, which would mean that randomness is a nearly omnipotent tool for algorithm design. The only viable path towards resolving this problem has been the concept of "pseudorandom generators" and the "hardness vs. randomness" paradigm: BPP can be nontrivially simulated by deterministic algorithms if some hard function is available. While the hard functions above needed in fact to be one-way functions, completely different pseudorandom generators allowed the use of any hard function in EXP for such a nontrivial simulation. Further progress considerably weakened the hardness requirement and considerably strengthened the deterministic simulation.
{"title":"De-randomizing BPP: the state of the art","authors":"A. Wigderson","doi":"10.1109/CCC.1999.766263","DOIUrl":"https://doi.org/10.1109/CCC.1999.766263","url":null,"abstract":"The introduction of randomization into efficient computation has been one of the most fertile and useful ideas in computer science. In cryptography and asynchronous computing, randomization makes possible tasks that are impossible to perform deterministically. Even for function computation, many examples are known in which randomization allows considerable savings in resources like space and time over deterministic algorithms, or even \"only\" simplifies them. But to what extent is this seeming power of randomness over determinism real? The most famous concrete version of this question regards the power of BPP, the class of problems solvable by probabilistic polynomial time algorithms making small constant error. What is the relative power of such algorithms compared to deterministic ones? This is largely open. On the one hand, it is possible that P=BPP, i.e., randomness is useless for solving new problems in polynomial-time. On the other, we might have BPP=EXP, which would say that randomness would be a nearly omnipotent tool for algorithm design. The only viable path towards resolving this problem was the concept of \"pseudorandom generators\", and the \"hardness vs. randomness\" paradigm: BPP can be nontrivially simulated by deterministic algorithms, if some hard function is available. While the hard functions above needed in fact to be one-way functions, completely different pseudo-random generators allowed the use of any hard function in EXP for such nontrivial simulation. Further progress considerably weakened the hardness requirement, and considerably strengthened the deterministic simulation.","PeriodicalId":432015,"journal":{"name":"Proceedings. 
Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132626533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We show that the problem of deciding whether a given rational lattice L has a vector of length less than some given value r is NP-hard under randomized reductions, even under the promise that L has exactly zero or one vector of length less than r.
{"title":"A note on the shortest lattice vector problem","authors":"Ravi Kumar, D. Sivakumar","doi":"10.1109/CCC.1999.766277","DOIUrl":"https://doi.org/10.1109/CCC.1999.766277","url":null,"abstract":"We show that the problem of deciding whether a given rational lattice L has a vector of length less than some given value r is NP-hard under randomized reductions, even under the promise that L has exactly zero or one vector of length less than r.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129526735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the publication of the seminal paper of M. Furst et al. (1984) connecting AC⁰ with the polynomial hierarchy, it has been well known that circuit lower bounds allow you to construct oracles that separate complexity classes. We show that similar circuit lower bounds allow you to construct oracles that collapse complexity classes. For example, based on Håstad's parity lower bound, we construct an oracle relative to which P = PH ⊆ ⊕P = EXP.
{"title":"Circuit lower bounds collapse relativized complexity classes","authors":"R. Beigel, Alexis Maciel","doi":"10.1109/CCC.1999.766280","DOIUrl":"https://doi.org/10.1109/CCC.1999.766280","url":null,"abstract":"Since the publication of M. Furst et al. (1984) seminal paper connecting AC/sup 0/ with the polynomial hierarchy, it has been well known that circuit lower bounds allow you to construct oracles that separate complexity classes. We show that similar circuit lower bounds allow you to construct oracles that collapse complexity classes. For example, based on Hastad's parity lower bound, we construct an oracle such that P=PH/spl sub//spl oplus/P=EXP.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124399723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the question: is finding just a part of a solution easier than finding the full solution? For example, is finding only an ε fraction of the bits in a satisfying assignment to a 3-CNF formula easier than computing the whole assignment? For several important problems in NP we show that obtaining only a small fraction of the solution is as hard as finding the full solution. This can be interpreted in two ways. On the positive side, it is enough to look for an efficient algorithm that recovers only a small part of the solution in order to completely solve any of these problems. On the negative side, any partial solution to these problems may be hard to find. Some of our results can also be interpreted as robust proofs of membership.
{"title":"Computing from partial solutions","authors":"A. Gál, S. Halevi, R. Lipton, E. Petrank","doi":"10.1109/CCC.1999.766260","DOIUrl":"https://doi.org/10.1109/CCC.1999.766260","url":null,"abstract":"We consider the question: Is finding just a part of a solution easier than finding the full solution? For example, is finding only an /spl epsiv/ fraction of the bits in a satisfying assignment to a 3-CNF formula easier than computing the whole assignment? For several important problems in NP we show that obtaining only a small fraction of the solution is as hard as finding the full solution. This can be interpreted in two ways: On the positive side, it is enough to look for an efficient algorithm that only recovers a small part of the solution, in order to completely solve any of these problems. On the negative side, any partial solution to these problems may be hard to find Some of our results can also be interpreted as robust proofs of membership.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"159 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131435928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we prove near-quadratic lower bounds for depth-3 arithmetic formulae over fields of characteristic zero. Such bounds are obtained for the elementary symmetric functions, the (trace of) iterated matrix multiplication, and the determinant. As corollaries we get the first non-trivial lower bounds for computing polynomials of constant degree, and a gap between the power of depth-3 and depth-4 arithmetic formulas. The main technical contribution relates the complexity of computing a polynomial in this model to the wealth of partial derivatives it has on every affine subspace of small co-dimension. Lower bounds for related models utilize an algebraic analog of Nechiporuk's lower bound on Boolean formulae.
{"title":"Depth-3 arithmetic formulae over fields of characteristic zero","authors":"Amir Shpilka, A. Wigderson","doi":"10.1109/CCC.1999.766267","DOIUrl":"https://doi.org/10.1109/CCC.1999.766267","url":null,"abstract":"In this paper we prove near quadratic lower bounds for depth-3 arithmetic formulae over fields of characteristic zero. Such bounds are obtained for the elementary symmetric functions, the (trace of) iterated matrix multiplication, and the determinant. As corollaries we get the first non-trivial lower bounds for computing polynomials of constant degree, and a gap between the power depth-3 arithmetic formulas and depth-4 arithmetic formulas. The main technical contribution relates the complexity of computing a polynomial in this model to the wealth of partial derivatives it has on every affine subspace of small co-dimension. Lower bounds for related models utilize an algebraic analog of Nechiporuk lower bound on Boolean formulae.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"33 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123259341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem of k-SAT is to determine whether a given k-CNF has a satisfying assignment. It is a celebrated open question whether solving k-SAT requires exponential time for k ≥ 3. Define s_k (for k ≥ 3) to be the infimum of {δ : there exists an O(2^(δn)) algorithm for solving k-SAT}. The Exponential-Time Hypothesis (ETH) for k-SAT asserts that s_k > 0 for k ≥ 3; in other words, for k ≥ 3, k-SAT does not have a subexponential-time algorithm. In this paper we show that s_k is an increasing sequence, assuming ETH for k-SAT. Let s_∞ be the limit of the s_k. We in fact show that s_k ≤ (1 - d/k) s_∞ for some constant d > 0.
{"title":"Complexity of k-SAT","authors":"R. Impagliazzo, R. Paturi","doi":"10.1109/CCC.1999.766282","DOIUrl":"https://doi.org/10.1109/CCC.1999.766282","url":null,"abstract":"The problem of k-SAT is to determine if the given k-CNF has a satisfying solution. It is a celebrated open question as to whether it requires exponential time to solve k-SAT for k/spl ges/3. Define s/sub k/ (for k/spl ges/3) to be the infimum of {/spl delta/: there exists an O(2/sup /spl delta/n/) algorithm for solving k-SAT}. Define ETH (Exponential-Time Hypothesis) for k-SAT as follows: for k/spl ges/3, s/sub k/>0. In other words, for k/spl ges/3, k-SA does not have a subexponential-time algorithm. In this paper we show that s/sub k/ is an increasing sequence assuming ETH for k-SAT: Let s/sub /spl infin// be the limit of s/sub k/. We in fact show that s/sub k//spl les/(1-d/k) s/sub /spl infin// for some constant d>0.","PeriodicalId":432015,"journal":{"name":"Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129001235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}