
Latest publications: 2008 23rd Annual IEEE Conference on Computational Complexity

Noisy Interpolating Sets for Low Degree Polynomials
Pub Date : 2008-06-22 DOI: 10.4086/toc.2011.v007a001
Zeev Dvir, Amir Shpilka
A noisy interpolating set (NIS) for degree d polynomials is a set S ⊆ F^n, where F is a finite field, such that any degree d polynomial q ∈ F[x1, ..., xn] can be efficiently interpolated from its values on S, even if an adversary corrupts a constant fraction of the values. In this paper we construct explicit NIS for every prime field Fp and any degree d. Our sets are of size O(n^d) and have efficient interpolation algorithms that can recover q from a fraction exp(-O(d)) of errors. Our construction is based on a theorem which roughly states that if S is a NIS for degree 1 polynomials then d·S = {α1 + ... + αd | αi ∈ S} is a NIS for degree d polynomials. Furthermore, given an efficient interpolation algorithm for S, we show how to use it in a black-box manner to build an efficient interpolation algorithm for d·S. As a corollary we get an explicit family of punctured Reed-Muller codes that is a family of good codes with an efficient decoding algorithm from a constant fraction of errors. To the best of our knowledge no such construction was known previously.
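The sumset operation d·S in the theorem is simple to compute explicitly. A minimal sketch of d·S over F_p^n (the particular field, set S, and parameters below are illustrative choices, not the paper's actual NIS construction):

```python
from itertools import product

def d_fold_sumset(S, d, p):
    """d*S = {a1 + ... + ad : each ai in S}, with coordinatewise addition mod p."""
    return {tuple(sum(coords) % p for coords in zip(*combo))
            for combo in product(S, repeat=d)}

# Toy example over F_5^2: a small set S and its 2-fold sumset 2*S.
S = {(0, 0), (1, 0), (0, 1)}
print(sorted(d_fold_sumset(S, 2, 5)))
# -> [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
```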
Citations: 9
Amplifying ZPP^SAT[1] and the Two Queries Problem
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.32
Richard Chang, Suresh Purini
This paper shows a complete upward collapse in the Polynomial Hierarchy (PH) if, for ZPP, two queries to a SAT oracle are equivalent to one query. That is, ZPP^SAT[1] = ZPP^SAT||[2] ⇒ ZPP^SAT[1] = PH. These ZPP machines are required to succeed with probability at least 1/2 + 1/p(n) on inputs of length n for some polynomial p(n). This result builds upon recent work by Tripathi, who showed a collapse of PH to S2^P. The use of the probability bound of 1/2 + 1/p(n) is justified in part by showing that this bound can be amplified to 1 - 2^(-n^k) for ZPP^SAT[1] computations. This paper also shows that in the deterministic case, P^SAT[1] = P^SAT||[2] ⇒ PH ⊆ ZPP^SAT[1], where the ZPP^SAT[1] machine achieves a probability of success of 1/2 - 2^(-n^k).
Citations: 6
Approximation of Natural W[P]-Complete Minimisation Problems Is Hard
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.24
Kord Eickmeyer, Martin Grohe, M. Grüber
We prove that the weighted monotone circuit satisfiability problem has no fixed-parameter tractable approximation algorithm with constant or polylogarithmic approximation ratio unless FPT = W[P]. Our result answers a question of Alekhnovich and Razborov, who proved that the weighted monotone circuit satisfiability problem has no fixed-parameter tractable 2-approximation algorithm unless every problem in W[P] can be solved by a randomized FPT algorithm, and asked whether their result can be derandomized. Alekhnovich and Razborov used their inapproximability result as a lemma for proving that resolution is not automatizable unless W[P] is contained in randomized FPT. It is an immediate consequence of our result that the complexity-theoretic assumption can be weakened to W[P] ≠ FPT. The decision version of the monotone circuit satisfiability problem is known to be complete for the class W[P]. By reducing them to the monotone circuit satisfiability problem with suitable approximation-preserving reductions, we prove similar inapproximability results for all other natural minimisation problems known to be W[P]-complete.
Citations: 22
Learning Complexity vs. Communication Complexity
Pub Date : 2008-06-22 DOI: 10.1017/S0963548308009656
N. Linial, A. Shraibman
This paper has two main focal points. We first consider an important class of machine learning algorithms - large margin classifiers, such as support vector machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas. In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity. There are numerous known relations between the field of learning theory and that of communication complexity, as one might expect since communication is an inherent aspect of learning. The results of this paper constitute another link in this rich web of relations. This link has already proved significant as it was used in the solution of a few open problems in communication complexity.
Citations: 72
Constraint Logic: A Uniform Framework for Modeling Computation as Games
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.35
E. Demaine, R. Hearn
We introduce a simple game family, called constraint logic, where players reverse edges in a directed graph while satisfying vertex in-flow constraints. This game family can be interpreted in many different game-theoretic settings, ranging from zero-player automata to a more economic setting of team multiplayer games with hidden information. Each setting gives rise to a model of computation that we show corresponds to a classic complexity class. In this way we obtain a uniform framework for modeling various complexities of computation as games. Most surprising among our results is that a game with three players and a bounded amount of state can simulate any (infinite) Turing computation, making the game undecidable. Our framework also provides a more graphical, less formulaic viewpoint of computation. This graph model has been shown to be particularly appropriate for reductions to many existing combinatorial games and puzzles - such as Sokoban, Rush Hour, river crossing, TipOver, the warehouseman's problem, pushing blocks, hinged-dissection reconfiguration, Amazons, and Konane (Hawaiian checkers) - which have an intrinsically planar structure. Our framework makes it substantially easier to prove completeness of such games in their appropriate complexity classes.
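As a toy illustration of a constraint-logic move, the following sketch checks whether reversing a single edge keeps every vertex's in-flow above its threshold. The specific conventions (edge weights 1 and 2, a uniform in-flow requirement of 2) and the example graph are assumptions for illustration, not taken from this abstract:

```python
def inflows(vertices, edges):
    """Total incoming weight per vertex; edges are (tail, head, weight) triples."""
    flow = {v: 0 for v in vertices}
    for tail, head, weight in edges:
        flow[head] += weight
    return flow

def legal_reversal(vertices, edges, i, threshold=2):
    """A move reverses edge i; it is legal iff every vertex's in-flow
    stays >= threshold afterwards."""
    tail, head, weight = edges[i]
    new_edges = edges[:i] + [(head, tail, weight)] + edges[i + 1:]
    return all(f >= threshold for f in inflows(vertices, new_edges).values())

vertices = ["a", "b"]
edges = [("a", "b", 2), ("b", "a", 2), ("a", "b", 1), ("b", "a", 1)]
# In-flows before any move: a receives 2 + 1 = 3, b receives 2 + 1 = 3.
print(legal_reversal(vertices, edges, 2))  # weight-1 edge: b keeps in-flow 2 -> True
print(legal_reversal(vertices, edges, 0))  # weight-2 edge: b drops to 1 -> False
```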
Citations: 34
A Direct Product Theorem for Discrepancy
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.25
Troy Lee, A. Shraibman, R. Spalek
Discrepancy is a versatile bound in communication complexity which can be used to show lower bounds in randomized, quantum, and even weakly-unbounded-error models of communication. We show an optimal product theorem for discrepancy, namely that for any two Boolean functions f, g, disc(f ⊙ g) = Θ(disc(f) · disc(g)). As a consequence we obtain a strong direct product theorem for distributional complexity, and direct sum theorems for worst-case complexity, for bounds shown by the discrepancy method. Our results resolve an open problem of Shaltiel (2003), who showed a weaker product theorem for discrepancy with respect to the uniform distribution, disc_U(f^⊙k) = O(disc_U(f))^(k/3). The main tool for our results is semidefinite programming, in particular a recent characterization of discrepancy in terms of a semidefinite programming quantity by Linial and Shraibman (2006).
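Under the uniform distribution, discrepancy can be brute-forced for tiny sign matrices, and the easy direction of the product theorem (a product rectangle witnesses disc(f ⊙ g) ≥ disc(f)·disc(g)) can be checked directly. A minimal sketch, assuming ⊙ combines the functions on independent coordinates so that the sign matrix of f ⊙ g is the Kronecker product of the individual sign matrices:

```python
import numpy as np

def disc_uniform(M):
    """Uniform-distribution discrepancy of a +/-1 sign matrix: the maximum,
    over all combinatorial rectangles (row subset x column subset), of
    |sum of entries inside the rectangle| divided by the matrix size.
    Brute force over row subsets; only sensible for tiny matrices."""
    n_rows = M.shape[0]
    best = 0.0
    for mask in range(1, 2 ** n_rows):
        rows = [i for i in range(n_rows) if mask >> i & 1]
        col_sums = M[rows].sum(axis=0)
        # For a fixed row subset, the best column subset takes either all
        # columns with positive sum or all columns with negative sum.
        best = max(best, col_sums[col_sums > 0].sum(), -col_sums[col_sums < 0].sum())
    return best / M.size

H = np.array([[1, 1], [1, -1]])  # sign matrix of the 1-bit inner product
HH = np.kron(H, H)               # sign matrix of the combined function
print(disc_uniform(H), disc_uniform(HH))
# Product rectangles give the easy direction disc(f . g) >= disc(f) * disc(g):
assert disc_uniform(HH) >= disc_uniform(H) ** 2
```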
Citations: 92
On the Relative Efficiency of Resolution-Like Proofs and Ordered Binary Decision Diagram Proofs
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.34
Nathan Segerlind
We show that tree-like OBDD proofs of unsatisfiability require an exponential increase (s → 2^(s^Ω(1))) in proof size to simulate unrestricted resolution, and that unrestricted OBDD proofs of unsatisfiability require an almost-exponential increase (s → 2^(2^((log s)^Ω(1)))) in proof size to simulate Res(O(log n)). The "OBDD proof system" that we consider has lines that are ordered binary decision diagrams in the same variables as the input formula, and is allowed to combine two previously derived OBDDs by any sound inference rule. In particular, this system abstracts satisfiability algorithms based upon explicit construction of OBDDs and satisfiability algorithms based upon symbolic quantifier elimination.
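To make the notion of a proof line concrete, here is a minimal sketch of an ordered binary decision diagram and its evaluation. The representation (tuples for decision nodes, Python booleans for terminals) is an illustrative choice; real OBDD packages additionally enforce reducedness by merging isomorphic subgraphs and removing redundant tests:

```python
# A minimal ordered-BDD sketch: each internal node is (var_index, lo, hi),
# terminals are the booleans False/True, and variables are tested in
# increasing index order along every path.

def obdd_eval(node, assignment):
    """Follow the lo/hi child according to the assignment until a terminal."""
    while not isinstance(node, bool):
        var, lo, hi = node
        node = hi if assignment[var] else lo
    return node

# OBDD for x0 AND x1 under the variable order x0 < x1:
and_node = (0, False, (1, False, True))
print([obdd_eval(and_node, a) for a in ([0, 0], [0, 1], [1, 0], [1, 1])])
# -> [False, False, False, True]
```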
Citations: 15
Randomised Individual Communication Complexity
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.33
H. Buhrman, M. Koucký, N. Vereshchagin
In this paper we study the individual communication complexity of the following problem. Alice receives an input string x and Bob an input string y, and Alice has to output y. For deterministic protocols it was shown in Buhrman et al. (2004) that C(y) bits need to be exchanged even if the actual amount of information C(y|x) is much smaller than C(y). It turns out that for randomised protocols the situation is very different. We establish randomised protocols whose communication complexity is close to the information-theoretic lower bound. We furthermore initiate and obtain results about the randomised round complexity of this problem and show trade-offs between the amount of communication and the number of rounds. In order to do this we establish a general framework for studying these types of questions.
Citations: 10
Amplifying Lower Bounds by Means of Self-Reducibility
Pub Date : 2008-06-22 DOI: 10.1145/1706591.1706594
E. Allender, M. Koucký
We observe that many important computational problems in NC^1 share a simple self-reducibility property. We then show that, for any problem A having this self-reducibility property, A has polynomial-size TC^0 circuits if and only if it has TC^0 circuits of size n^(1+ε) for every ε > 0 (counting the number of wires in a circuit as the size of the circuit). As an example of what this observation yields, consider the Boolean formula evaluation problem (BFE), which is complete for NC^1. It follows from a lower bound of Impagliazzo, Paturi, and Saks that BFE requires depth-d TC^0 circuits of size n^(1+ε_d). If one were able to improve this lower bound to show that there is some constant ε > 0 such that every TC^0 circuit family recognizing BFE has size n^(1+ε), then it would follow that TC^0 ≠ NC^1. We also show that problems with small uniform constant-depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth-d TC^0 and AC^0[6] circuits of size n^(1+c) for some constant c depending on d.
Citations: 88
Quantum Expanders: Motivation and Constructions
Pub Date : 2008-06-22 DOI: 10.1109/CCC.2008.23
Avraham Ben-Aroya, O. Schwartz, A. Ta-Shma
We define quantum expanders in a natural way. We give two constructions of quantum expanders, both based on classical expander constructions. The first construction is algebraic, and is based on the construction of Cayley Ramanujan graphs over the group PGL(2, q) given by Lubotzky et al. (1988). The second construction is combinatorial, and is based on a quantum variant of the Zig-Zag product introduced by Reingold et al. (2000). Both constructions are of constant degree, and the second one is explicit. Using quantum expanders, we characterize the complexity of comparing and estimating quantum entropies. Specifically, we consider the following task: given two mixed states, each given by a quantum circuit generating it, decide which mixed state has more entropy. We show that this problem is QSZK-complete (where QSZK is the class of languages having a zero-knowledge quantum interactive protocol). This problem is very well motivated from a physical point of view. Our proof resembles the classical proof that the entropy difference problem is SZK-complete, but crucially depends on the use of quantum expanders.
我们用自然的方式定义量子膨胀器。我们给出了两种基于经典扩展器结构的量子扩展器结构。第一种构造是代数的,基于Lubotzky et al.(1988)给出的群PGL(2, q)上的Cayley Ramanujan图的构造。第二种结构是组合的,基于Reingold等人(2000)引入的z - zag积的量子变体。这两个结构都是定度的,第二个结构是明确的。利用量子扩展器,我们描述了比较和估计量子熵的复杂性。具体来说,我们考虑以下任务:给定两种混合状态,每一种状态都由产生它的量子电路给出,决定哪种混合状态具有更多的熵。我们证明了这个问题是QSZK完备的(其中QSZK是具有零知识量子交互协议的语言类)。从物理角度来看,这个问题的动机很好。我们的证明类似于熵差问题是szk完全的经典证明,但关键取决于量子膨胀机的使用。
Citations: 18
Journal: 2008 23rd Annual IEEE Conference on Computational Complexity