
21st Annual IEEE Conference on Computational Complexity (CCC'06): Latest Publications

Making hard problems harder
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.26
Joshua Buresh-Oppenheim, R. Santhanam
We consider a general approach to the hoary problem of (im)proving circuit lower bounds. We define notions of hardness condensing and hardness extraction, in analogy to the corresponding notions from the computational theory of randomness. A hardness condenser is a procedure that takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function on a smaller number of bits which has greater hardness when measured in terms of input length. A hardness extractor takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function defined on a smaller number of bits which has close to maximum hardness. We prove several positive and negative results about these objects. First, we observe that hardness-based pseudo-random generators can be used to extract deterministic hardness from non-deterministic hardness. We derive several consequences of this observation. Among other results, we show that if E/O(n) has exponential non-deterministic hardness, then E/O(n) has deterministic hardness 2^n/n, which is close to the maximum possible. We demonstrate a rare downward closure result: E with sub-exponential advice is contained in non-uniform space 2^{δn} for all δ > 0 if and only if there is k > 0 such that P with quadratic advice can be approximated in non-uniform space n^k. Next, we consider limitations on natural models of hardness condensing and extraction. We show lower bounds on the advice length required for hardness condensing in a very general model of "relativizing" condensers. We show that non-trivial black-box extraction of deterministic hardness from deterministic hardness is essentially impossible. Finally, we prove positive results on hardness condensing in certain special cases. We show how to condense hardness from a biased function without advice using a hashing technique. We also give a hardness condenser without advice from average-case hardness to worst-case hardness. Our technique uses a connection between hardness condensing and explicit constructions of covering codes.
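As a rough illustration of the kind of object being defined (the parameters n, n', s, s', and ℓ below are our own illustrative names, not necessarily the paper's), a hardness condenser can be pictured as a map

\[
C \;:\; \bigl\{ f : \{0,1\}^{n} \to \{0,1\} \bigr\} \times \{0,1\}^{\ell} \;\longrightarrow\; \bigl\{ g : \{0,1\}^{n'} \to \{0,1\} \bigr\}, \qquad n' < n,
\]

which uses an advice string of length ℓ to turn a function of hardness s into a function of hardness s' satisfying s'/n' > s/n, i.e. the output is harder relative to its input length; a hardness extractor is the extreme case in which s' approaches the maximum possible value, on the order of 2^{n'}/n'.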
Citations: 11
Learning monotone decision trees in polynomial time
Pub Date : 2006-07-16 DOI: 10.1137/060669309
Ryan O'Donnell, R. Servedio
We give an algorithm that learns any monotone Boolean function f: {-1,1}^n → {-1,1} to any constant accuracy, under the uniform distribution, in time polynomial in n and in the decision tree size of f. This is the first algorithm that can learn arbitrary monotone Boolean functions to high accuracy, using random examples only, in time polynomial in a reasonable measure of the complexity of f. A key ingredient of the result is a new bound showing that the average sensitivity of any monotone function computed by a decision tree of size s must be at most √(log s). This bound has already proved to be of independent utility in the study of decision tree complexity (Schramm et al., 2005). We generalize the basic inequality and learning result described above in various ways; specifically, to partition size (a stronger complexity measure than decision tree size), p-biased measures over the Boolean cube (rather than just the uniform distribution), and real-valued (rather than just Boolean-valued) functions.
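As a small illustration of the quantity appearing in the √(log s) bound (this is only a Monte Carlo estimator for the average sensitivity, not the paper's learning algorithm, and maj3 is a made-up example function), one can estimate the average sensitivity of a Boolean function by sampling:

import random

def average_sensitivity(f, n, samples=20000):
    # Monte Carlo estimate of sum_i Pr_x[f(x) != f(x with coordinate i flipped)],
    # where x is uniform over {-1,1}^n.
    flips = 0
    for _ in range(samples):
        x = [random.choice([-1, 1]) for _ in range(n)]
        i = random.randrange(n)   # pick one coordinate at random
        y = list(x)
        y[i] = -y[i]              # flip it
        if f(x) != f(y):
            flips += 1
    return n * flips / samples    # rescale from one random coordinate to the sum over all n

# Example: majority of 3 bits, a monotone function with average sensitivity 1.5.
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(average_sensitivity(maj3, 3))   # prints roughly 1.5

For a monotone f computed by a size-s decision tree, the bound above says this quantity is at most √(log s).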
Citations: 109
Gödel and Computations
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.21
P. Pudlák
Gödel was born 100 years ago in this country. It is a good opportunity to commemorate this anniversary with a lecture about his influence on computational complexity. He also made contributions to computability, the length of proofs, and diagonalization.
Citations: 1
Parallel repetition of zero-knowledge proofs and the possibility of basing cryptography on NP-hardness
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.33
R. Pass
Two long-standing open problems exist on the fringe of complexity theory and cryptography: (1) Does there exist a reduction from an NP-complete problem to a one-way function? (2) Do parallelized versions of classical constant-round zero-knowledge proofs for NP conceal every "hard" bit of the witness to the statement proved? We show that, unless the polynomial hierarchy collapses, black-box reductions cannot be used to provide positive answers to both questions.
Citations: 27
Derandomization of probabilistic auxiliary pushdown automata classes
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.16
H. Venkateswaran
We extend Nisan's breakthrough derandomization result that BP_HL ⊆ SC² (1992) to bounded-error probabilistic complexity classes based on auxiliary pushdown automata. In particular, we show that any logarithmic-space, polynomial-time two-sided bounded-error probabilistic auxiliary pushdown automaton (the corresponding complexity class is denoted by BP_HLOGCFL) can be simulated by an SC² machine. This derandomization result improves a classical result by Cook (1979) that LOGDCFL ⊆ SC², since LOGDCFL is contained in BP_HLOGCFL. We also present a simple circuit-based proof that BP_HLOGCFL is in NC².
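Spelled out as a chain of containments in the notation above, the improvement over Cook's result is

\[
\mathrm{LOGDCFL} \;\subseteq\; \mathrm{BP_{H}LOGCFL} \;\subseteq\; \mathrm{SC}^{2},
\]

where the first inclusion is the reason the new simulation subsumes the classical containment LOGDCFL ⊆ SC².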
Citations: 5
A 3-query non-adaptive PCP with perfect completeness
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.5
Subhash Khot, Rishi Saket
We study a very basic open problem regarding the PCP characterization of NP, namely, the power of PCPs with 3 non-adaptive queries and perfect completeness. The lowest soundness known till now for such a PCP is 6/8 + ε, given by a construction of Håstad (1997). However, Zwick (1998) shows that a 3-query non-adaptive PCP with perfect completeness cannot achieve soundness below 5/8. In this paper, we construct a 3-query non-adaptive PCP with perfect completeness and soundness 20/27 + ε, which improves upon the previous best soundness of 6/8 + ε. A standard reduction from PCPs to constraint satisfaction problems (CSPs) implies that it is NP-hard to tell whether a Boolean CSP on 3 variables has a satisfying assignment or no assignment satisfies more than a 20/27 + ε fraction of the constraints. Our construction uses "biased long codes" introduced by Dinur and Safra (2002). We develop new 3-query tests to check consistency between such codes. These tests are analyzed by extending Håstad's Fourier methods (1997) to the biased case.
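In gap-problem notation (ours, not the paper's), the CSP consequence reads: for every constant ε > 0, given a Boolean CSP in which each constraint depends on 3 variables, it is NP-hard to distinguish

\[
\mathrm{OPT} = 1 \qquad \text{from} \qquad \mathrm{OPT} \;\le\; \tfrac{20}{27} + \varepsilon,
\]

where OPT is the largest fraction of constraints that any single assignment satisfies.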
Citations: 24
FO[<]-uniformity
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.20
C. Behle, K. Lange
Uniformity notions more restrictive than the usual FO[<, +, *]-uniformity = FO[<, Bit]-uniformity are introduced. It is shown that the general framework exhibited by Barrington et al. still holds if the fan-in of the gates in the corresponding circuits is considered.
Citations: 19
QMA/qpoly ⊆ PSPACE/poly: de-Merlinizing quantum protocols
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.36
S. Aaronson
This paper introduces a new technique for removing existential quantifiers over quantum states. Using this technique, we show that there is no way to pack an exponential number of bits into a polynomial-size quantum state, in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness. We also show that any problem in QMA with polynomial-size quantum advice is also in PSPACE with polynomial-size classical advice. This builds on our earlier result that BQP/qpoly ⊆ PP/poly, and offers an intriguing counterpoint to the recent discovery of Raz that QIP/qpoly = ALL. Finally, we show that QCMA/qpoly ⊆ PP/poly and that QMA/rpoly = QMA/poly.
Citations: 18
Grid graph reachability problems
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.22
E. Allender, D. M. Barrington, T. Chakraborty, Samir Datta, Sambuddha Roy
We study the complexity of reachability problems on various classes of grid graphs. Reachability on certain classes of grid graphs gives natural examples of problems that are hard for NC¹ under AC⁰ reductions but are not known to be hard for L; they thus give insight into the structure of L. In addition to explicating the structure of L, another of our goals is to expand the class of digraphs for which connectivity can be solved in logspace, by building on the work of Jakoby et al. (2001), who showed that reachability in series-parallel digraphs is solvable in L. We show that reachability for single-source, multiple-sink planar DAGs is solvable in L.
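For concreteness (this is just an ordinary breadth-first search on a toy grid graph, not the space-bounded algorithms the paper is concerned with, and the vertex encoding is our own), a grid graph reachability instance looks like this:

from collections import deque

def grid_reachable(edges, source, target):
    # edges: directed edges between grid vertices (row, col), each going to a
    # horizontally or vertically adjacent vertex.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# A 2 x 3 grid with every edge directed rightwards or downwards: a planar DAG
# with the single source (0, 0).
edges = {((0, 0), (0, 1)), ((0, 1), (0, 2)), ((0, 0), (1, 0)),
         ((1, 0), (1, 1)), ((1, 1), (1, 2)), ((0, 2), (1, 2))}
print(grid_reachable(edges, (0, 0), (1, 2)))   # prints True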
Citations: 25
Hardness of the covering radius problem on lattices
Pub Date : 2006-07-16 DOI: 10.1109/CCC.2006.23
I. Haviv, O. Regev
We provide the first hardness result for the covering radius problem on lattices (CRP). Namely, we show that for any large enough p ≤ ∞ there exists a constant c_p > 1 such that CRP in the ℓ_p norm is Π₂-hard to approximate to within any constant less than c_p. In particular, for the case p = ∞, we obtain the constant c_∞ = 1.5. This gets close to the constant 2, beyond which the problem is not believed to be Π₂-hard. As part of our proof, we establish a stronger hardness of approximation result for the ∀∃-3-SAT problem with bounded occurrences. This hardness result might be useful elsewhere.
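For reference (this is the standard definition, stated in our own notation rather than quoted from the paper), the covering radius of a lattice \mathcal{L} in the ℓ_p norm is

\[
\rho^{(p)}(\mathcal{L}) \;=\; \max_{x \in \operatorname{span}(\mathcal{L})} \; \min_{v \in \mathcal{L}} \; \lVert x - v \rVert_p,
\]

and CRP asks how well this quantity can be estimated given a basis for \mathcal{L}. The outer maximization over all of span(\mathcal{L}) followed by the inner minimization over lattice points gives the problem its ∀∃ structure, which is why Π₂-hardness is the natural notion of hardness to aim for.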
Citations: 27