Noisy Interpolating Sets for Low Degree Polynomials
Zeev Dvir, Amir Shpilka
Pub Date: 2008-06-22 | DOI: 10.4086/toc.2011.v007a001
A noisy interpolating set (NIS) for degree-d polynomials is a set S ⊆ F^n, where F is a finite field, such that any degree-d polynomial q ∈ F[x_1, ..., x_n] can be efficiently interpolated from its values on S, even if an adversary corrupts a constant fraction of the values. In this paper we construct explicit NISs for every prime field F_p and every degree d. Our sets are of size O(n^d) and have efficient interpolation algorithms that can recover q from a fraction exp(-O(d)) of errors. Our construction is based on a theorem which roughly states that if S is a NIS for degree-1 polynomials then d·S = {α_1 + ... + α_d | α_i ∈ S} is a NIS for degree-d polynomials. Furthermore, given an efficient interpolation algorithm for S, we show how to use it in a black-box manner to build an efficient interpolation algorithm for d·S. As a corollary we get an explicit family of punctured Reed-Muller codes that are good codes with an efficient decoding algorithm from a constant fraction of errors. To the best of our knowledge, no such construction was known previously.
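The set operation at the heart of the construction is easy to state in code. Below is a minimal Python sketch of d·S under a toy choice of S; the paper's actual degree-1 NIS is not reproduced here, and `d_fold_sumset` together with the parameter values are illustrative assumptions only.

```python
# Minimal sketch of the d*S operation from the abstract: given a set S of
# points in F_p^n (assumed here to be a NIS for degree-1 polynomials), build
# d*S = { a_1 + ... + a_d : a_i in S }, with sums taken coordinate-wise mod p.
# The choice of S below is a toy example, not the construction from the paper.
from itertools import combinations_with_replacement, product

def d_fold_sumset(S, d, p):
    """All sums of d (not necessarily distinct) points of S, mod p."""
    result = set()
    for pts in combinations_with_replacement(S, d):
        result.add(tuple(sum(c) % p for c in zip(*pts)))
    return result

p, n, d = 5, 3, 2
# Toy S: the origin plus all nonzero scalar multiples of the basis vectors.
S = {tuple(0 for _ in range(n))}
for i, c in product(range(n), range(1, p)):
    v = [0] * n
    v[i] = c
    S.add(tuple(v))

dS = d_fold_sumset(S, d, p)
print(len(S), len(dS))  # |d*S| is on the order of n^d for fixed p and d
```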
{"title":"Noisy Interpolating Sets for Low Degree Polynomials","authors":"Zeev Dvir, Amir Shpilka","doi":"10.4086/toc.2011.v007a001","DOIUrl":"https://doi.org/10.4086/toc.2011.v007a001","url":null,"abstract":"A noisy interpolating set (NIS) for degree d polynomials is a set S sube Fn, where F is a finite field, such that any degree d polynomial q isin F[x1,..., xn] can be efficiently interpolated from its values on S, even if an adversary corrupts a constant fraction of the values. In this paper we construct explicit NIS for every prime field Fp and any degree d. Our sets are of size O(nd) and have efficient interpolation algorithms that can recover qfrom a fraction exp(-O(d)) of errors. Our construction is based on a theorem which roughly states that ifS is a NIS for degree I polynomials then dldrS = {alpha1 + ... + alphad | alpha1 isin S} is a NIS for degree d polynomials. Furthermore, given an efficient interpolation algorithm for S, we show how to use it in a black-box manner to build an efficient interpolation algorithm for d ldr S. As a corollary we get an explicit family of punctured Reed-Muller codes that is a family of good codes that have an efficient decoding algorithm from a constant fraction of errors. To the best of our knowledge no such construction was known previously.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115341835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amplifying ZPP^SAT[1] and the Two Queries Problem
Richard Chang, Suresh Purini
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.32
This paper shows a complete upward collapse in the Polynomial Hierarchy (PH) if, for ZPP, two queries to a SAT oracle are equivalent to one query. That is, ZPP^SAT[1] = ZPP^SAT||[2] ⇒ ZPP^SAT[1] = PH. These ZPP machines are required to succeed with probability at least 1/2 + 1/p(n) on inputs of length n for some polynomial p(n). This result builds upon recent work by Tripathi, who showed that the same hypothesis collapses PH to S_2^P. The use of the probability bound 1/2 + 1/p(n) is justified in part by showing that this bound can be amplified to 1 - 2^(-n^k) for ZPP^SAT[1] computations. This paper also shows that in the deterministic case, P^SAT[1] = P^SAT||[2] ⇒ PH ⊆ ZPP^SAT[1], where the ZPP^SAT[1] machine achieves a probability of success of 1/2 - 2^(-n^k).
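For intuition, the back-of-the-envelope arithmetic below shows why a success bound of 1/2 + 1/p(n) would normally be easy to amplify by independent repetition; this is illustrative arithmetic, not the paper's argument. The subtlety the paper addresses is that each repetition of a ZPP^SAT[1] computation issues its own SAT query, so naive repetition does not stay within a single query.

```latex
% Illustrative only: a ZPP machine answers correctly with probability at
% least 1/2 + 1/p(n) and otherwise outputs "?", so t independent runs all
% fail to produce an answer with probability
\[
  \Pr[\text{all } t \text{ runs output ``?''}]
    \;\le\; \Bigl(\tfrac{1}{2} - \tfrac{1}{p(n)}\Bigr)^{t}
    \;\le\; 2^{-t} \;=\; 2^{-n^{k}}
    \qquad \text{for } t = n^{k}.
\]
```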
{"title":"Amplifying ZPP^SAT[1] and the Two Queries Problem","authors":"Richard Chang, Suresh Purini","doi":"10.1109/CCC.2008.32","DOIUrl":"https://doi.org/10.1109/CCC.2008.32","url":null,"abstract":"This paper shows a complete upward collapse in the Polynomial Hierarchy (PH) if for ZPP, two queries to a SAT oracle is equivalent to one query. That is, ZPP<sup>SAT[1]</sup> = ZPP<sup>SAT||[2]</sup> rArr ZPP<sup>SAT[1]</sup> = PH. These ZPP machines are required to succeed with probability at least 1/2 + 1/p(n) on inputs of length n for some polynomial p(n). This result builds upon recent work by Tripathi who showed a collapse of PH to S<sub>2</sub> <sup>P</sup>. The use of the probability bound of 1/2 + 1/p(n) is justified in part by showing that this bound can be amplified to 1 - 2<sup>-nk</sup> for ZPP<sup>SAT[1]</sup> computations. This paper also shows that in the deterministic case, P<sup>SAT[1]</sup> = P<sup>SAT||[2]</sup> rArr PH sube ZPP<sup>SAT[1]</sup> where the ZPP<sup>SAT[1]</sup> machine achieves a probability of success of 1/2 - 2<sup>-nk</sup>.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124948445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation of Natural W[P]-Complete Minimisation Problems Is Hard
Kord Eickmeyer, Martin Grohe, M. Grüber
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.24
We prove that the weighted monotone circuit satisfiability problem has no fixed-parameter tractable approximation algorithm with constant or polylogarithmic approximation ratio unless FPT = W[P]. Our result answers a question of Alekhnovich and Razborov, who proved that the weighted monotone circuit satisfiability problem has no fixed-parameter tractable 2-approximation algorithm unless every problem in W[P] can be solved by a randomised fpt algorithm, and who asked whether their result can be derandomised. Alekhnovich and Razborov used their inapproximability result as a lemma in proving that resolution is not automatizable unless W[P] is contained in randomised FPT; an immediate consequence of our result is that this complexity-theoretic assumption can be weakened to W[P] ≠ FPT. The decision version of the monotone circuit satisfiability problem is known to be complete for the class W[P]. By reducing all other natural minimisation problems known to be W[P]-complete to the monotone circuit satisfiability problem via suitable approximation-preserving reductions, we prove similar inapproximability results for them.
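To pin down the optimisation problem in question, here is a toy brute-force Python sketch; the circuit encoding and function names are illustrative assumptions, and exhaustive search is exponential — the paper's point is precisely that even approximating the optimum is not fixed-parameter tractable.

```python
# Weighted monotone circuit satisfiability: given a circuit with AND/OR
# gates only (no negations), find the minimum Hamming weight of a
# satisfying assignment. Brute-force sketch for toy instances.
from itertools import combinations

def eval_monotone(gates, inputs):
    """gates: list of (op, a, b) over wire indices; wires 0..n-1 are the
    inputs, wire n+i is the output of gate i; the last gate is the output."""
    val = list(inputs)
    for op, a, b in gates:
        val.append(val[a] and val[b] if op == "AND" else val[a] or val[b])
    return bool(val[-1])

def min_weight_satisfying(gates, n):
    """Minimum Hamming weight of a satisfying assignment, by brute force."""
    for w in range(n + 1):
        for ones in combinations(range(n), w):
            x = [0] * n
            for i in ones:
                x[i] = 1
            if eval_monotone(gates, x):
                return w
    return None

# (x0 AND x1) OR x2 over n = 3 inputs: wire 3 = x0 AND x1, wire 4 = wire3 OR x2.
print(min_weight_satisfying([("AND", 0, 1), ("OR", 3, 2)], 3))  # -> 1
```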
{"title":"Approximation of Natural W[P]-Complete Minimisation Problems Is Hard","authors":"Kord Eickmeyer, Martin Grohe, M. Grüber","doi":"10.1109/CCC.2008.24","DOIUrl":"https://doi.org/10.1109/CCC.2008.24","url":null,"abstract":"We prove that the weighted monotone circuit satisfiability problem has no fixed-parameter tractable approximation algorithm with constant or polylogarithmic approximation ratio unless FPT = W[P]. Our result answers a question of Alekhnovich and Razborov, who proved that the weighted monotone circuit satisfiability problem has no fixed-parameter tractable 2-approximation algorithm unless every problem in W[P] can be solved by a randomized fpt algorithm and asked whether their result can be derandomized. Alekhnovich and Razborov used their inapproximability result as a lemma for proving that resolution is not automatizable unless W[P] is contained in randomized FPT. It is an immediate consequence of our result that the complexity theoretic assumption can be weakened to W[P] ne FPT. The decision version of the monotone circuit satisfiability problem is known to be complete for the class W[P]. By reducing them to the monotone circuit satisfiability problem with suitable approximation preserving reductions, we prove similar inapproximability results for all other natural minimisation problems known to be W[P]-complete.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130912232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Complexity vs. Communication Complexity
N. Linial, A. Shraibman
Pub Date: 2008-06-22 | DOI: 10.1017/S0963548308009656
This paper has two main focal points. We first consider an important class of machine learning algorithms - large margin classifiers, such as support vector machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas. In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity. There are numerous known relations between the field of learning theory and that of communication complexity, as one might expect since communication is an inherent aspect of learning. The results of this paper constitute another link in this rich web of relations. This link has already proved significant as it was used in the solution of a few open problems in communication complexity.
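The abstract does not reproduce the definitions it relies on. For orientation, the LaTeX block below states the standard ones from the literature for a sign matrix A ∈ {-1,+1}^{m×n}; these are my hedged reconstruction of the usual conventions, not text from the paper.

```latex
% Standard definitions, stated for orientation only (not quoted from the
% paper): margin complexity via unit-vector realizations of A, discrepancy
% via the hardest distribution over combinatorial rectangles.
\[
  \mathrm{margin}(A) \;=\; \sup \Bigl\{ \min_{i,j} \,\lvert\langle x_i, y_j\rangle\rvert
    \;:\; \lVert x_i\rVert = \lVert y_j\rVert = 1,\;
    \operatorname{sign}\langle x_i, y_j\rangle = A_{ij} \Bigr\},
  \qquad
  \mathrm{mc}(A) \;=\; \frac{1}{\mathrm{margin}(A)},
\]
\[
  \mathrm{disc}(A) \;=\; \min_{P}\; \max_{\text{rectangles } S \times T}\;
    \Bigl\lvert \sum_{(i,j) \in S \times T} P(i,j)\, A_{ij} \Bigr\rvert,
  \qquad\text{so the first result reads}\quad
  \mathrm{mc}(A) \;=\; \Theta\bigl(1/\mathrm{disc}(A)\bigr).
\]
```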
{"title":"Learning Complexity vs. Communication Complexity","authors":"N. Linial, A. Shraibman","doi":"10.1017/S0963548308009656","DOIUrl":"https://doi.org/10.1017/S0963548308009656","url":null,"abstract":"This paper has two main focal points. We first consider an important class of machine learning algorithms - large margin classifiers, such as support vector machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas. In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity. There are numerous known relations between the field of learning theory and that of communication complexity, as one might expect since communication is an inherent aspect of learning. The results of this paper constitute another link in this rich web of relations. This link has already proved significant as it was used in the solution of a few open problems in communication complexity.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127567210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constraint Logic: A Uniform Framework for Modeling Computation as Games
E. Demaine, R. Hearn
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.35
We introduce a simple game family, called constraint logic, where players reverse edges in a directed graph while satisfying vertex in-flow constraints. This game family can be interpreted in many different game-theoretic settings, ranging from zero-player automata to a more economic setting of team multiplayer games with hidden information. Each setting gives rise to a model of computation that we show corresponds to a classic complexity class. In this way we obtain a uniform framework for modeling various complexities of computation as games. Most surprising among our results is that a game with three players and a bounded amount of state can simulate any (infinite) Turing computation, making the game undecidable. Our framework also provides a more graphical, less formulaic viewpoint of computation. This graph model has been shown to be particularly appropriate for reducing to many existing combinatorial games and puzzles - such as Sokoban, Rush Hour, River Crossing, TipOver, the warehouseman's problem, pushing blocks, hinged-dissection reconfiguration, Amazons, and Konane (Hawaiian checkers) - which have an intrinsically planar structure. Our framework makes it substantially easier to prove completeness of such games in their appropriate complexity classes.
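The basic move rule is concrete enough to sketch in code. Below is a minimal Python sketch of edge reversal in the single-player ("nondeterministic constraint logic") setting, assuming the standard formulation in which every vertex requires total incoming edge weight at least 2; the encoding and function names are illustrative, not the paper's.

```python
# A configuration is an orientation of an edge-weighted graph; an edge may
# be reversed only if every vertex still receives in-flow >= its threshold.
def in_flow(edges, orientation, v):
    """Total weight currently directed into vertex v."""
    return sum(w for eid, (a, b, w) in edges.items() if orientation[eid] == v)

def legal_reversals(edges, orientation, threshold=2):
    """Edges whose reversal keeps every vertex's in-flow >= threshold.
    Only the two endpoints of the reversed edge can be affected."""
    moves = []
    for eid, (a, b, w) in edges.items():
        trial = dict(orientation)
        trial[eid] = a if orientation[eid] == b else b   # flip the head
        if all(in_flow(edges, trial, v) >= threshold for v in (a, b)):
            moves.append(eid)
    return moves

# Toy instance: three parallel weight-2 ("blue") edges between u and v,
# one pointing at u and two pointing at v, so in-flows are 2 and 4.
edges = {"e1": ("u", "v", 2), "e2": ("u", "v", 2), "e3": ("u", "v", 2)}
orientation = {"e1": "u", "e2": "v", "e3": "v"}
print(legal_reversals(edges, orientation))   # -> ['e2', 'e3']
```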
{"title":"Constraint Logic: A Uniform Framework for Modeling Computation as Games","authors":"E. Demaine, R. Hearn","doi":"10.1109/CCC.2008.35","DOIUrl":"https://doi.org/10.1109/CCC.2008.35","url":null,"abstract":"We introduce a simple game family, called constraint logic, where players reverse edges in a directed graph while satisfying vertex in-flow constraints. This game family can be interpreted in many different game-theoretic settings, ranging from zero-player automata to a more economic setting of team multiplayer games with hidden information. Each setting gives rise to a model of computation that we show corresponds to a classic complexity class. In this way we obtain a uniform framework for modeling various complexities of computation as games. Most surprising among our results is that a game with three players and a bounded amount of state can simulate any (infinite) Turing computation, making the game undecidable. Our framework also provides a more graphical, less formulaic viewpoint of computation. This graph model has been shown to be particularly appropriate for reducing to many existing combinatorial games and puzzles - such as Sokoban, rush hour, river crossing, tipover, the warehouseman's problem, pushing blocks, hinged-dissection reconfiguration, Amazons, and Konane (hawaiian checkers) - which have an intrinsically planar structure. Our framework makes it substantially easier to prove completeness of such games in their appropriate complexity classes.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134189514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Direct Product Theorem for Discrepancy
Troy Lee, A. Shraibman, R. Spalek
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.25
Discrepancy is a versatile bound in communication complexity which can be used to show lower bounds in randomized, quantum, and even weakly-unbounded-error models of communication. We show an optimal product theorem for discrepancy, namely that for any two Boolean functions f, g, disc(f ⊙ g) = Θ(disc(f) · disc(g)). As a consequence we obtain a strong direct product theorem for distributional complexity, and direct sum theorems for worst-case complexity, for bounds shown by the discrepancy method. Our results resolve an open problem of Shaltiel (2003), who showed a weaker product theorem for discrepancy with respect to the uniform distribution: disc_U(f^⊙k) = O(disc_U(f))^(k/3). The main tool for our results is semidefinite programming, in particular a recent characterization of discrepancy in terms of a semidefinite programming quantity by Linial and Shraibman (2006).
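The uniform-distribution quantity disc_U in Shaltiel's bound can be computed by brute force on toy matrices. The Python sketch below does exactly that (exponential in the matrix size, so illustration only); the matrix `IP2` and the helper names are my own choices, not from the paper.

```python
# Brute-force discrepancy under the uniform distribution P(i,j) = 1/(mn):
# disc_U(A) is the largest normalised imbalance of A over all combinatorial
# rectangles S x T of rows and columns.
from itertools import chain, combinations

def subsets(idx):
    return chain.from_iterable(combinations(idx, r) for r in range(len(idx) + 1))

def disc_uniform(A):
    m, n = len(A), len(A[0])
    best = 0.0
    for S in subsets(range(m)):
        for T in subsets(range(n)):
            best = max(best, abs(sum(A[i][j] for i in S for j in T)))
    return best / (m * n)

# Inner product on 2-bit inputs as a +-1 matrix (rows x, columns y);
# discrepancy of such matrices shrinks rapidly as the matrix grows.
IP2 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
print(disc_uniform(IP2))
```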
{"title":"A Direct Product Theorem for Discrepancy","authors":"Troy Lee, A. Shraibman, R. Spalek","doi":"10.1109/CCC.2008.25","DOIUrl":"https://doi.org/10.1109/CCC.2008.25","url":null,"abstract":"Discrepancy is a versatile bound in communication complexity which can be used to show lower bounds in randomized, quantum, and even weakly-unbounded error models of communication. We show an optimal product theorem for discrepancy, namely that for any two Boolean functions f, g, disc(f odot g)=thetas(disc(f) disc(g)). As a consequence we obtain a strong direct product theorem for distributional complexity, and direct sum theorems for worst-case complexity, for bounds shown by the discrepancy method. Our results resolve an open problem of Shaltiel (2003) who showed a weaker product theorem for discrepancy with respect to the uniform distribution, discUodot(fodotk)=O(discU(f))k/3. The main tool for our results is semidefinite programming, in particular a recent characterization of discrepancy in terms of a semidefinite programming quantity by Linial and Shraibman (2006).","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126864104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Relative Efficiency of Resolution-Like Proofs and Ordered Binary Decision Diagram Proofs
Nathan Segerlind
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.34
We show that tree-like OBDD proofs of unsatisfiability require an exponential increase (s → 2^(s^Ω(1))) in proof size to simulate unrestricted resolution, and that unrestricted OBDD proofs of unsatisfiability require an almost-exponential increase (s → 2^(2^((log s)^Ω(1)))) in proof size to simulate Res(O(log n)). The "OBDD proof system" that we consider has lines that are ordered binary decision diagrams in the same variables as the input formula, and is allowed to combine two previously derived OBDDs by any sound inference rule. In particular, this system abstracts satisfiability algorithms based upon explicit construction of OBDDs and satisfiability algorithms based upon symbolic quantifier elimination.
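To make the proof lines concrete, here is a minimal Python sketch of a reduced OBDD over a fixed variable order, with an `apply_op` operation that combines two OBDDs by an arbitrary binary connective; this is an illustrative data structure, not the paper's machinery.

```python
# OBDDs as nested tuples (var, lo, hi) with Python booleans as terminals;
# structural equality over a fixed variable order gives canonicity.
def mk(var, lo, hi):
    """Node constructor with the OBDD reduction rule: skip redundant tests."""
    return lo if lo == hi else (var, lo, hi)

def var_index(u):
    return u[0] if isinstance(u, tuple) else float("inf")

def apply_op(op, u, v, memo=None):
    """Combine two OBDDs (same variable order) with a binary connective."""
    if memo is None:
        memo = {}
    if (u, v) in memo:
        return memo[(u, v)]
    if isinstance(u, bool) and isinstance(v, bool):
        res = op(u, v)
    else:
        x = min(var_index(u), var_index(v))
        u0, u1 = (u[1], u[2]) if var_index(u) == x else (u, u)
        v0, v1 = (v[1], v[2]) if var_index(v) == x else (v, v)
        res = mk(x, apply_op(op, u0, v0, memo), apply_op(op, u1, v1, memo))
    memo[(u, v)] = res
    return res

AND = lambda a, b: a and b
OR = lambda a, b: a or b

x1, not_x1, not_x2 = (1, False, True), (1, True, False), (2, True, False)
x2 = (2, False, True)
f = apply_op(AND, apply_op(OR, x1, x2), not_x1)  # (x1 OR x2) AND NOT x1
print(f)                                          # (1, (2, False, True), False)
print(apply_op(AND, f, not_x2))                   # False: a derived contradiction
```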
{"title":"On the Relative Efficiency of Resolution-Like Proofs and Ordered Binary Decision Diagram Proofs","authors":"Nathan Segerlind","doi":"10.1109/CCC.2008.34","DOIUrl":"https://doi.org/10.1109/CCC.2008.34","url":null,"abstract":"We show that tree-like OBDD proofs of unsatisfiability require an exponential increase (s rarr 2s Omega(1)) in proof size to simulate unrestricted resolution, and that unrestricted OBDD proofs of unsatisfiability require an almost-exponential increase (s rarr 22(log s) Omega(1)) in proof size to simulate Res (O(log n)). The \"OBDD proof system\" that we consider has lines that are ordered binary decision diagrams in the same variables as the input formula, and is allowed to combine two previously derived OBDDs by any sound inference rule. In particular, this system abstracts satisfiability algorithms based upon explicit construction of OBDDs and satisfiability algorithms based upon symbolic quantifier elimination.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116984946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Randomised Individual Communication Complexity
H. Buhrman, M. Koucký, N. Vereshchagin
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.33
In this paper we study the individual communication complexity of the following problem. Alice receives an input string x and Bob an input string y, and Alice has to output y. For deterministic protocols it was shown by Buhrman et al. (2004) that C(y) bits need to be exchanged even if the actual amount of information C(y|x) is much smaller than C(y). It turns out that for randomised protocols the situation is very different: we establish randomised protocols whose communication complexity is close to the information-theoretic lower bound. We furthermore initiate the study of, and obtain results about, the randomised round complexity of this problem, and show trade-offs between the amount of communication and the number of rounds. In order to do this we establish a general framework for studying these types of questions.
{"title":"Randomised Individual Communication Complexity","authors":"H. Buhrman, M. Koucký, N. Vereshchagin","doi":"10.1109/CCC.2008.33","DOIUrl":"https://doi.org/10.1109/CCC.2008.33","url":null,"abstract":"In this paper we study the individual communication complexity of the following problem. Alice receives an input string x and Bob an input string y, and Alice has to output y. For deterministic protocols it has been shown in Buhrman et al. (2004), that C(y) many bits need to be exchanged even if the actual amount of information C(y|x) is much smaller than C(y). It turns out that for randomised protocols the situation is very different. We establish randomised protocols whose communication complexity is close to the information theoretical lower bound. We furthermore initiate and obtain results about the randomised round complexity of this problem and show trade-offs between the amount of communication and the number of rounds. In order to do this we establish a general framework for studying these types of questions.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130569126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amplifying Lower Bounds by Means of Self-Reducibility
E. Allender, M. Koucký
Pub Date: 2008-06-22 | DOI: 10.1145/1706591.1706594
We observe that many important computational problems in NC^1 share a simple self-reducibility property. We then show that, for any problem A having this self-reducibility property, A has polynomial-size TC^0 circuits if and only if it has TC^0 circuits of size n^(1+ε) for every ε > 0 (counting the number of wires in a circuit as the size of the circuit). As an example of what this observation yields, consider the Boolean formula evaluation problem (BFE), which is complete for NC^1. It follows from a lower bound of Impagliazzo, Paturi, and Saks that BFE requires depth-d TC^0 circuits of size n^(1+ε_d). If one were able to improve this lower bound to show that there is some constant ε > 0 such that every TC^0 circuit family recognizing BFE has size n^(1+ε), then it would follow that TC^0 ≠ NC^1. We also show that problems with small uniform constant-depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth-d TC^0 and AC^0[6] circuits of size n^(1+c) for some constant c depending on d.
{"title":"Amplifying Lower Bounds by Means of Self-Reducibility","authors":"E. Allender, M. Koucký","doi":"10.1145/1706591.1706594","DOIUrl":"https://doi.org/10.1145/1706591.1706594","url":null,"abstract":"We observe that many important computational problems in NC<sup>1</sup> share a simple self-reducibility property. We then show that, for any problem A having this self-reducibility property, A has polynomial size TC<sup>0</sup> circuits if and only if it has TC<sup>0</sup> circuits of size n<sup>1+isin</sup> for every isin>0 (counting the number of wires in a circuit as the size of the circuit). As an example of what this observation yields, consider the Boolean formula evaluation problem (BFE), which is complete for NC<sup>1</sup>. It follows from a lower bound of Impagliazzo, Paturi, and Saks, that BFE requires depth d TC<sup>0</sup> circuits of size n<sup>1+isin</sup> <sup>d</sup>. If one were able to improve this lower bound to show that there is some constant isin>0 such that every TC<sup>0</sup> circuit family recognizing BFE has size n<sup>1+isin</sup>, then it would follow that TC<sup>0</sup>neNC<sup>1</sup>. We also show that problems with small uniform constant- depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth d TC<sup>0</sup> and AC<sup>0</sup> [6] circuits of size n<sup>1+c</sup> for some constant c depending on d.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116048549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum Expanders: Motivation and Constructions
Avraham Ben-Aroya, O. Schwartz, A. Ta-Shma
Pub Date: 2008-06-22 | DOI: 10.1109/CCC.2008.23
We define quantum expanders in a natural way. We give two constructions of quantum expanders, both based on classical expander constructions. The first construction is algebraic, and is based on the construction of Cayley Ramanujan graphs over the group PGL(2, q) given by Lubotzky et al. (1988). The second construction is combinatorial, and is based on a quantum variant of the Zig-Zag product introduced by Reingold et al. (2000). Both constructions are of constant degree, and the second one is explicit. Using quantum expanders, we characterize the complexity of comparing and estimating quantum entropies. Specifically, we consider the following task: given two mixed states, each specified by a quantum circuit generating it, decide which mixed state has more entropy. We show that this problem is QSZK-complete (where QSZK is the class of languages having a zero-knowledge quantum interactive protocol). This problem is well motivated from a physical point of view. Our proof resembles the classical proof that the entropy difference problem is SZK-complete, but crucially depends on the use of quantum expanders.
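The object being constructed can be illustrated numerically. The numpy sketch below applies a degree-D channel E(ρ) = (1/D) Σ U_d ρ U_d† and watches states contract toward the maximally mixed state I/N. It uses Haar-random unitaries for convenience, which typically behave like good expanders with high probability; the paper's constructions are explicit and of constant degree, and `haar_unitary` and `expander_channel` are hypothetical helper names, not the paper's.

```python
# Illustrative sketch of a quantum expander as a channel: repeated
# application drives every density matrix toward the maximally mixed state.
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))      # fix column phases for true Haar measure

def expander_channel(rho, unitaries):
    """E(rho) = (1/D) * sum_d U_d rho U_d^dagger."""
    return sum(u @ rho @ u.conj().T for u in unitaries) / len(unitaries)

rng = np.random.default_rng(0)
N, D = 16, 4
us = [haar_unitary(N, rng) for _ in range(D)]

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                     # a pure state, far from maximally mixed
for t in range(6):
    dist = np.linalg.norm(rho - np.eye(N) / N)   # Frobenius distance to I/N
    print(t, round(float(dist), 4))              # shrinks geometrically
    rho = expander_channel(rho, us)
```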
{"title":"Quantum Expanders: Motivation and Constructions","authors":"Avraham Ben-Aroya, O. Schwartz, A. Ta-Shma","doi":"10.1109/CCC.2008.23","DOIUrl":"https://doi.org/10.1109/CCC.2008.23","url":null,"abstract":"We define quantum expanders in a natural way. We give two constructions of quantum expanders, both based on classical expander constructions. The first construction is algebraic, and is based on the construction of Cayley Ramanujan graphs over the group PGL(2, q) given by Lubotzky et al. (1988). The second construction is combinatorial, and is based on a quantum variant of the Zig-Zag product introduced by Reingold et al. (2000). Both constructions are of constant degree, and the second one is explicit. Using quantum expanders, we characterize the complexity of comparing and estimating quantum entropies. Specifically, we consider the following task: given two mixed states, each given by a quantum circuit generating it, decide which mixed state has more entropy. We show that this problem is QSZK-complete (where QSZK is the class of languages having a zero-knowledge quantum interactive protocol). This problem is very well motivated from a physical point of view. Our proof resembles the classical proof that the entropy difference problem is SZK-complete, but crucially depends on the use of quantum expanders.","PeriodicalId":338061,"journal":{"name":"2008 23rd Annual IEEE Conference on Computational Complexity","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127162969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}