We consider a general approach to the hoary problem of (im)proving circuit lower bounds. We define notions of hardness condensing and hardness extraction, in analogy to the corresponding notions from the computational theory of randomness. A hardness condenser is a procedure that takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function on a smaller number of bits which has greater hardness when measured in terms of input length. A hardness extractor takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function defined on a smaller number of bits which has close to maximum hardness. We prove several positive and negative results about these objects. First, we observe that hardness-based pseudo-random generators can be used to extract deterministic hardness from non-deterministic hardness. We derive several consequences of this observation. Among other results, we show that if E/O(n) has exponential non-deterministic hardness, then E/O(n) has deterministic hardness 2^n/n, which is close to the maximum possible. We demonstrate a rare downward closure result: E with sub-exponential advice is contained in non-uniform space 2^(δn) for all δ > 0 if and only if there is a k > 0 such that P with quadratic advice can be approximated in non-uniform space n^k. Next, we consider limitations on natural models of hardness condensing and extraction. We show lower bounds on the advice length required for hardness condensing in a very general model of "relativizing" condensers. We show that non-trivial black-box extraction of deterministic hardness from deterministic hardness is essentially impossible. Finally, we prove positive results on hardness condensing in certain special cases. We show how to condense hardness from a biased function without advice using a hashing technique. We also give a hardness condenser without advice from average-case hardness to worst-case hardness. Our technique uses a connection between hardness condensing and explicit constructions of covering codes.
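As a reading aid, the condenser notion just defined can be written out as follows; this is our hedged paraphrase of the prose, where the parameter names n, m, a, s are illustrative and the paper's formal definition may differ in details:

    \[
      C \colon \{0,1\}^{2^{n}} \times \{0,1\}^{a} \to \{0,1\}^{2^{m}}, \qquad m < n,
    \]
    \[
      \text{hardness}(f) \ge s \;\Longrightarrow\; \exists\, y \in \{0,1\}^{a} :\ \text{hardness}(C(f, y)) \ge s',
    \]

where a function on n bits is identified with its 2^n-bit truth table, and s' should exceed s "relative to input length": since the maximum hardness of an m-bit function is roughly 2^m/m, even preserving absolute hardness while shrinking the input yields a relatively harder function.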
{"title":"Making hard problems harder","authors":"Joshua Buresh-Oppenheim, R. Santhanam","doi":"10.1109/CCC.2006.26","DOIUrl":"https://doi.org/10.1109/CCC.2006.26","url":null,"abstract":"We consider a general approach to the hoary problem of (im)proving circuit lower bounds. We define notions of hardness condensing and hardness extraction, in analogy to the corresponding notions from the computational theory of randomness. A hardness condenser is a procedure that takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function on a smaller number of bits which has greater hardness when measured in terms of input length. A hardness extractor takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function defined on a smaller number of bits which has close to maximum hardness. We prove several positive and negative results about these objects. First, we observe that hardness-based pseudo-random generators can be used to extract deterministic hardness from non-deterministic hardness. We derive several consequences of this observation. Among other results, we show that if E/O(n) has exponential non-deterministic hardness, then E/O{n) has deterministic hardness 2n/n, which is close to the maximum possible. We demonstrate a rare downward closure result: E with sub-exponential advice is contained in non-uniform space 2deltan for all delta > 0 if and only if there is k > 0 such that P with quadratic advice can be approximated in non-uniform space nk . Next, we consider limitations on natural models of hardness condensing and extraction. We show lower bounds on the advice length required for hardness condensing in a very general model of \"relativizing\" condensers. We show that non-trivial black-box extraction of deterministic hardness from deterministic hardness is essentially impossible. Finally, we prove positive results on hardness condensing in certain special cases. We show how to condense hardness from a biased function without advice using a hashing technique. We also give a hardness condenser without advice from average-case hardness to worst-case hardness. Our technique uses a connection between hardness condensing and explicit constructions of covering codes","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126678832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We give an algorithm that learns any monotone Boolean function f: {-1,1}^n → {-1,1} to any constant accuracy, under the uniform distribution, in time polynomial in n and in the decision tree size of f. This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of f. A key ingredient of the result is a new bound showing that the average sensitivity of any monotone function computed by a decision tree of size s must be at most √(log s). This bound has already proved to be of independent utility in the study of decision tree complexity (Schramm et al., 2005). We generalize the basic inequality and learning result described above in various ways: specifically, to partition size (a stronger complexity measure than decision tree size), to p-biased measures over the Boolean cube (rather than just the uniform distribution), and to real-valued (rather than just Boolean-valued) functions.
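A concrete, heavily simplified illustration of the Fourier-analytic ingredient: for a monotone f, the degree-1 Fourier coefficient f̂({i}) = E[f(x)·x_i] equals the influence of variable i, so the influences, and hence the average sensitivity bounded above, can be estimated from uniform random examples alone. The Python sketch below shows just this estimation step; it is not the authors' learning algorithm, and all names and sample sizes are our own choices.

    import random

    def uniform_example(f, n):
        # Draw x uniformly from {-1, 1}^n and label it with f(x).
        x = [random.choice([-1, 1]) for _ in range(n)]
        return x, f(x)

    def estimate_degree1_coeffs(f, n, samples=20000):
        # For monotone f, f_hat({i}) = E[f(x) * x_i] equals the influence
        # of variable i, so the empirical average over uniform random
        # examples estimates all n influences simultaneously.
        acc = [0.0] * n
        for _ in range(samples):
            x, y = uniform_example(f, n)
            for i in range(n):
                acc[i] += y * x[i]
        return [a / samples for a in acc]

    # Majority of the first three of five variables: each relevant variable
    # has influence 1/2, the two irrelevant ones have influence 0, so the
    # average sensitivity (the sum of the influences) is 1.5.
    def maj3(x):
        return 1 if x[0] + x[1] + x[2] > 0 else -1

    coeffs = estimate_degree1_coeffs(maj3, 5)
    print([round(c, 2) for c in coeffs])   # roughly [0.5, 0.5, 0.5, 0.0, 0.0]
    print(round(sum(coeffs), 2))           # roughly 1.5, the average sensitivity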
{"title":"Learning monotone decision trees in polynomial time","authors":"Ryan O'Donnell, R. Servedio","doi":"10.1137/060669309","DOIUrl":"https://doi.org/10.1137/060669309","url":null,"abstract":"We give an algorithm that learns any monotone Boolean function f: {-1, 1}n rarr {-1, 1} to any constant accuracy, under the uniform distribution, in time polynomial in n and in the decision tree size of f. This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of f. A key ingredient of the result is a new bound showing that the average sensitivity of any monotone function computed by a decision tree of size s must be at most radic(log s). This bound has already proved to be of independent utility in the study of decision tree complexity (Schramm et al., 2005). We generalize the basic inequality and learning result described above in various ways; specifically, to partition size (a stronger complexity measure than decision tree size), p-biased measures over the Boolean cube (rather than just the uniform distribution), and real-valued (rather than just Boolean-valued) functions","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117336905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gödel was born 100 years ago in this country. It is a good opportunity to commemorate this anniversary with a lecture about his influence on computational complexity. He also made contributions to computability, the lengths of proofs, and diagonalization.
{"title":"Godel and Computations","authors":"P. Pudlák","doi":"10.1109/CCC.2006.21","DOIUrl":"https://doi.org/10.1109/CCC.2006.21","url":null,"abstract":"Godel was born 100 years ago in this country. It is a good opportunity to commemorate this anniversary by a lecture about his influence on computational complexity. He also made contributions on computability, length of proofs, and diagonalization","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134219879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two long-standing open problems exist on the fringe of complexity theory and cryptography: (1) Does there exist a reduction from an NP-complete problem to a one-way function? (2) Do parallelized versions of classical constant-round zero-knowledge proofs for NP conceal every "hard" bit of the witness to the statement proved? We show that, unless the polynomial hierarchy collapses, black-box reductions cannot be used to provide positive answers to both questions.
{"title":"Parallel repetition of zero-knowledge proofs and the possibility of basing cryptography on NP-hardness","authors":"R. Pass","doi":"10.1109/CCC.2006.33","DOIUrl":"https://doi.org/10.1109/CCC.2006.33","url":null,"abstract":"Two long-standing open problems exist on the fringe of complexity theory and cryptography: (1) Does there exist a reduction from an NP-complete problem to a one-way function? (2) Do parallelized versions of classical constant-round zero-knowledge proofs for NP conceal every \"hard\" bit of the witness to the statement proved? We show that, unless the polynomial-hierarchy collapses, black-box reductions cannot be used to provide positive answers to both questions","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131467660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We extend Nisan's breakthrough derandomization result that BP_HL ⊆ SC^2 (1992) to bounded-error probabilistic complexity classes based on auxiliary pushdown automata. In particular, we show that any logarithmic-space, polynomial-time, two-sided bounded-error probabilistic auxiliary pushdown automaton (the corresponding complexity class is denoted BP_HLOGCFL) can be simulated by an SC^2 machine. This derandomization result improves a classical result of Cook (1979) that LOGDCFL ⊆ SC^2, since LOGDCFL is contained in BP_HLOGCFL. We also present a simple circuit-based proof that BP_HLOGCFL is in NC^2.
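Stated as one chain of containments (a symbolic restatement of the abstract, nothing new):

    \[
      \mathrm{LOGDCFL} \;\subseteq\; \mathrm{BP_{H}LOGCFL} \;\subseteq\; \mathrm{SC}^{2},
      \qquad
      \mathrm{BP_{H}LOGCFL} \;\subseteq\; \mathrm{NC}^{2}.
    \]

The first containment is why the new simulation subsumes Cook's LOGDCFL ⊆ SC^2.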
{"title":"Derandomization of probabilistic auxiliary pushdown automata classes","authors":"H. Venkateswaran","doi":"10.1109/CCC.2006.16","DOIUrl":"https://doi.org/10.1109/CCC.2006.16","url":null,"abstract":"We extend Nisan's breakthrough derandomization result that BP<sub>H</sub>L sube SC<sup>2</sup> (1992) to bounded error probabilistic complexity classes based on auxiliary pushdown automata. In particular, we show that any logarithmic space, polynomial time two-sided bounded-error probabilistic auxiliary pushdown automaton (the corresponding complexity class is denoted by BP<sub>H</sub>LOGCFL) can be simulated by an SC<sup>2</sup> machine. This derandomization result improves a classical result by Cook (1979) that LOGDCFL sube SC<sup>2 </sup> since LOGDCFL is contained in BP<sub>H</sub>LOGCFL. We also present a simple circuit-based proof that BP<sub>H</sub>LOGCFL is in NC <sup>2</sup>","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115268644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study a very basic open problem regarding the PCP characterization of NP, namely, the power of PCPs with 3 non-adaptive queries and perfect completeness. The lowest soundness known until now for such a PCP is 6/8 + ε, given by a construction of Håstad (1997). On the other hand, Zwick (1998) shows that a 3-query non-adaptive PCP with perfect completeness cannot achieve soundness below 5/8. In this paper, we construct a 3-query non-adaptive PCP with perfect completeness and soundness 20/27 + ε, which improves upon the previous best soundness of 6/8 + ε. A standard reduction from PCPs to constraint satisfaction problems (CSPs) implies that it is NP-hard to tell whether a Boolean CSP whose constraints are on 3 variables has a satisfying assignment or no assignment satisfies more than a 20/27 + ε fraction of the constraints. Our construction uses the "biased long codes" introduced by Dinur and Safra (2002). We develop new 3-query tests to check consistency between such codes; these tests are analyzed by extending Håstad's Fourier methods (1997) to the biased case.
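For orientation, the constants quoted above line up as follows (smaller soundness is a stronger PCP):

    \[
      \tfrac{5}{8} = 0.625 \;<\; \tfrac{20}{27} \approx 0.741 \;<\; \tfrac{6}{8} = 0.75,
    \]

so the new soundness 20/27 + ε sits strictly between Zwick's lower bound and Håstad's earlier upper bound.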
{"title":"A 3-query non-adaptive PCP with perfect completeness","authors":"Subhash Khot, Rishi Saket","doi":"10.1109/CCC.2006.5","DOIUrl":"https://doi.org/10.1109/CCC.2006.5","url":null,"abstract":"We study a very basic open problem regarding the PCP characterization of NP, namely, the power of PCPs with 3 non-adaptive queries and perfect completeness. The lowest soundness known till now for such a PCP is 6/8 + epsi given by a construction of Hastad (1997). However, Zwick (1998) shows that a 3-query non-adaptive PCP with perfect completeness cannot achieve soundness below 5/8. In this paper, we construct a 3-query non-adaptive PCP with perfect completeness and soundness 20/27 + epsi, which improves upon the previous best soundness of 6/8 + epsi. A standard reduction from PCPs to constraint satisfaction problems (CSPs) implies that it is NP-hard to tell if a Boolean CSP on 3-variables has a satisfying assignment or no assignment satisfies more than 20/27 + epsi fraction of the constraints. Our construction uses \"biased long codes\" introduced by Dinur and Safra (2002). We develop new 3-query tests to check consistency between such codes. These tests are analyzed by extending Hastad's Fourier methods (1997) to the biased case","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128937058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uniformity notions more restrictive than the usual FO[<, +, *]-uniformity = FO[<, Bit]-uniformity are introduced. It is shown that the general framework exhibited by Barrington et al. still holds if the fan-in of the gates in the corresponding circuits is taken into account.
{"title":"FO[<]-uniformity","authors":"C. Behle, K. Lange","doi":"10.1109/CCC.2006.20","DOIUrl":"https://doi.org/10.1109/CCC.2006.20","url":null,"abstract":"Uniformity notions more restrictive than the usual FO[<, +, *]-uniformity = FO[<, Bit]-uniformity are introduced. It is shown that the general framework exhibited by Barrington et al. still holds if the fan-in of the gates in the corresponding circuits is considered","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122859706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a new technique for removing existential quantifiers over quantum states. Using this technique, we show that there is no way to pack an exponential number of bits into a polynomial-size quantum state in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness. We also show that any problem in QMA with polynomial-size quantum advice is also in PSPACE with polynomial-size classical advice. This builds on our earlier result that BQP/qpoly ⊆ PP/poly, and offers an intriguing counterpoint to the recent discovery of Raz that QIP/qpoly = ALL. Finally, we show that QCMA/qpoly ⊆ PP/poly and that QMA/rpoly = QMA/poly.
{"title":"QMA/qpoly /spl sube/ PSPACE/poly: de-Merlinizing quantum protocols","authors":"S. Aaronson","doi":"10.1109/CCC.2006.36","DOIUrl":"https://doi.org/10.1109/CCC.2006.36","url":null,"abstract":"This paper introduces a new technique for removing existential quantifiers over quantum states. Using this technique, we show that there is no way to pack an exponential number of bits into a polynomial-size quantum state, in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness. We also show that any problem in QMA with polynomial-size quantum advice, is also in PSPACE with polynomial-size classical advice. This builds on our earlier result that BQP/qpoly sube PP/poly, and offers an intriguing counterpoint to the recent discovery of Raz that QIP/qpoly = ALL. Finally, we show that QCMA/qpoly sube PP/poly and that QMA/rpoly = QMA/poly","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123447392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the complexity of reachability problems on various classes of grid graphs. Reachability on certain classes of grid graphs gives natural examples of problems that are hard for NC^1 under AC^0 reductions but are not known to be hard for L; they thus give insight into the structure of L. In addition to explicating the structure of L, another of our goals is to expand the class of digraphs for which connectivity can be solved in logspace, building on the work of Jakoby et al. (2001), who showed that reachability in series-parallel digraphs is solvable in L. We show that reachability for single-source, multiple-sink planar DAGs is solvable in L.
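To fix notation, here is a minimal Python sketch of the reachability problem itself on a directed grid graph, decided by ordinary BFS. This is only a statement of the problem: the point of the paper is deciding such instances in logspace, which BFS does not achieve. The function names and the instance are our own.

    from collections import deque

    def grid_reachable(arcs, s, t):
        # arcs: directed edges ((r1, c1), (r2, c2)) between adjacent grid
        # cells; s, t: source and target cells. Plain BFS over the cells.
        adj = {}
        for u, v in arcs:
            adj.setdefault(u, []).append(v)
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    # A 2x2 grid whose arcs all point right or down: a planar DAG with the
    # single source (0,0), as in the class of instances the paper solves in L.
    arcs = [((0, 0), (0, 1)), ((0, 0), (1, 0)), ((0, 1), (1, 1))]
    print(grid_reachable(arcs, (0, 0), (1, 1)))   # True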
{"title":"Grid graph reachability problems","authors":"E. Allender, D. M. Barrington, T. Chakraborty, Samir Datta, Sambuddha Roy","doi":"10.1109/CCC.2006.22","DOIUrl":"https://doi.org/10.1109/CCC.2006.22","url":null,"abstract":"We study the complexity of reachability problems on various classes of grid graphs. Reachability on certain classes of grid graphs gives natural examples of problems that are hard for NC1 under AC0 reductions but are not known to be hard far L; they thus give insight into the structure of L. In addition to explicating the structure of L, another of our goals is to expand the class of digraphs for which connectivity can be solved in logspace, by building on the work of Jakoby et al. (2001), who showed that reachability in series-parallel digraphs is solvable in L. We show that reachability for single-source multiple sink planar dags is solvable in L","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128116737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We provide the first hardness result for the covering radius problem on lattices (CRP). Namely, we show that for any large enough p ≤ ∞ there exists a constant c_p > 1 such that CRP in the ℓ_p norm is Π₂-hard to approximate to within any constant less than c_p. In particular, for the case p = ∞, we obtain the constant c_∞ = 1.5. This gets close to the constant 2, beyond which the problem is not believed to be Π₂-hard. As part of our proof, we establish a stronger hardness-of-approximation result for the ∀∃-3-SAT problem with bounded occurrences; this hardness result might be useful elsewhere.
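For reference, the quantity being approximated is the covering radius of a lattice L ⊆ ℝ^n in the ℓ_p norm; this is the standard definition, not something specific to this paper:

    \[
      \rho_{p}(\mathcal{L}) \;=\; \max_{x \in \mathbb{R}^{n}} \, \min_{v \in \mathcal{L}} \, \lVert x - v \rVert_{p}.
    \]

The ∀x ∃v quantifier alternation in "every point lies within distance r of the lattice" is what makes Π₂ the natural home for CRP.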
{"title":"Hardness of the covering radius problem on lattices","authors":"I. Haviv, O. Regev","doi":"10.1109/CCC.2006.23","DOIUrl":"https://doi.org/10.1109/CCC.2006.23","url":null,"abstract":"We provide the first hardness result for the covering radius problem on lattices (CRP). Namely, we show that for any large enough p les infin there exists a constant cp > 1 such that CRP in the lscrp norm is Pi2-hard to approximate to within any constant less than cp. In particular, for the case p = infin, we obtain the constant Cinfin = 1.5. This gets close to the constant 2 beyond which the problem is not believed to be Pi2-hard. As part of our proof, we establish a stronger hardness of approximation result for the forallexist-3-SAT problem with bounded occurrences. This hardness result might be useful elsewhere","PeriodicalId":325664,"journal":{"name":"21st Annual IEEE Conference on Computational Complexity (CCC'06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114792324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}