The gap-ETH assumption (Dinur 2016; Manurangsi and Raghavendra 2016) asserts that it is exponentially hard to distinguish between a satisfiable 3-CNF formula and a 3-CNF formula that is at most 0.99-satisfiable. We show that this assumption follows from the exponential hardness of finding a satisfying assignment for smooth 3-CNFs. Here smoothness means that the number of satisfying assignments is not much smaller than the number of almost-satisfying assignments. We further show that the latter (smooth-ETH) assumption follows from the exponential hardness of solving constraint satisfaction problems over well-studied distributions, and, more generally, from the existence of any exponentially-hard locally-computable one-way function. This confirms a conjecture of Dinur (ECCC 2016). We also prove an analogous result in the cryptographic setting. Namely, we show that the existence of an exponentially-hard locally-computable pseudorandom generator with linear stretch (el-PRG) follows from the existence of an exponentially-hard locally-computable almost-regular one-way function. Neither of the above assumptions (gap-ETH and el-PRG) was previously known to follow from the hardness of a search problem. Our results are based on a new construction of general (GL-type) hardcore functions that, for any exponentially-hard one-way function, output linearly many hardcore bits, can be locally computed, and consume only a linear number of random bits. We also show that such hardcore functions have several other useful applications in cryptography and complexity theory.
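For concreteness, the assumption above can be written as the following short LaTeX assertion; the constant 0.99 comes from the text, while the exponent δ is an illustrative placeholder for the precise constant in the cited works.

```latex
% gap-ETH, as paraphrased above; \delta is an illustrative constant, not the exact one.
\exists\, \delta > 0:\ \text{no algorithm running in time } 2^{\delta n}
\text{ can, given a 3-CNF } \varphi \text{ on } n \text{ variables, distinguish}
\quad \mathrm{val}(\varphi) = 1 \quad\text{from}\quad \mathrm{val}(\varphi) \le 0.99,
\quad\text{where } \mathrm{val}(\varphi) \text{ is the maximum fraction of clauses satisfiable by any single assignment.}
```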
{"title":"Exponentially-Hard Gap-CSP and Local PRG via Local Hardcore Functions","authors":"B. Applebaum","doi":"10.1109/FOCS.2017.82","DOIUrl":"https://doi.org/10.1109/FOCS.2017.82","url":null,"abstract":"The gap-ETH assumption (Dinur 2016; Manurangsi and Raghavendra 2016) asserts that it is exponentially-hard to distinguish between a satisfiable 3-CNF formula and a 3-CNF formula which is at most 0.99-satisfiable. We show that this assumption follows from the exponential hardness of finding a satisfying assignment for smooth 3-CNFs. Here smoothness means that the number of satisfying assignments is not much smaller than the number of almost-satisfying assignments. We further show that the latter (smooth-ETH) assumption follows from the exponential hardness of solving constraint satisfaction problems over well-studied distributions, and, more generally, from the existence of any exponentially-hard locally-computable one-way function. This confirms a conjecture of Dinur (ECCC 2016).We also prove an analogous result in the cryptographic setting. Namely, we show that the existence of exponentially-hard locally-computable pseudorandom generator with linear stretch (el-PRG) follows from the existence of an exponentially-hard locally-computable almost regular one-way functions.None of the above assumptions (gap-ETH and el-PRG) was previously known to follow from the hardness of a search problem. Our results are based on a new construction of general (GL-type) hardcore functions that, for any exponentially-hard one-way function, output linearly many hardcore bits, can be locally computed, and consume only a linear amount of random bits. We also show that such hardcore functions have several other useful applications in cryptography and complexity theory.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127443287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consider the problem of estimating the (un)reliability of an n-vertex graph when edges fail with probability p. We show that the Recursive Contraction Algorithm for minimum cuts, essentially unchanged and running in n^{2+o(1)} time, yields an unbiased estimator of constant relative variance (and thus an FPRAS with the same time bound) whenever pc
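For context, here is a minimal Python sketch of the naive Monte Carlo baseline for this estimation problem (each edge fails independently with probability p, and we estimate the probability that the surviving graph disconnects). This is the straightforward estimator that contraction-based methods improve upon, not the paper's algorithm; the toy graph and parameter values are made up for illustration.

```python
import random

def is_connected(n, edges):
    """Check connectivity of an n-vertex graph via union-find."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    components = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return components == 1

def naive_unreliability(n, edges, p, trials=100_000):
    """Monte Carlo estimate of Pr[graph disconnects] when each edge fails
    independently with probability p.  Unbiased, but it needs on the order of
    1/unreliability trials for constant relative variance, which is what the
    contraction-based estimators improve upon."""
    failures = 0
    for _ in range(trials):
        surviving = [e for e in edges if random.random() > p]
        if not is_connected(n, surviving):
            failures += 1
    return failures / trials

if __name__ == "__main__":
    # Toy example: a 4-cycle with edge failure probability 0.1.
    cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(naive_unreliability(4, cycle, p=0.1))
```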
{"title":"Faster (and Still Pretty Simple) Unbiased Estimators for Network (Un)reliability","authors":"David R Karger","doi":"10.1109/FOCS.2017.75","DOIUrl":"https://doi.org/10.1109/FOCS.2017.75","url":null,"abstract":"Consider the problem of estimating the (un)reliability of an n-vertex graph when edges fail with probability p. We show that the Recursive Contraction Algorithms for minimum cuts, essentially unchanged and running in n2+o(1) time, yields an unbiased estimator of constant relative variance (and thus an FPRAS with the same time bound) whenever pc","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123044275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-malleable commitments are a fundamental cryptographic tool for protecting against (concurrent) man-in-the-middle attacks. Since their invention by Dolev, Dwork, and Naor in 1991, the round-complexity of non-malleable commitments has been extensively studied, leading up to constant-round concurrent non-malleable commitments based only on one-way functions, and even 3-round concurrent non-malleable commitments based on subexponential one-way functions. But constructions of two-round, or non-interactive, non-malleable commitments have so far remained elusive; the only known construction relied on a strong and non-falsifiable assumption with a non-malleability flavor. Additionally, a recent result by Pass shows the impossibility of basing two-round non-malleable commitments on falsifiable assumptions using a polynomial-time black-box security reduction. In this work, we show how to overcome this impossibility using super-polynomial-time hardness assumptions. Our main result demonstrates the existence of a two-round concurrent non-malleable commitment based on sub-exponential standard-type assumptions—notably, assuming the existence of the following primitives (all with subexponential security): (1) non-interactive commitments, (2) ZAPs (i.e., 2-round witness-indistinguishable proofs), (3) collision-resistant hash functions, and (4) a weak time-lock puzzle. Primitives (1), (2), and (3) can be based on, e.g., the discrete log assumption and the RSA assumption. Time-lock puzzles—puzzles that can be solved by brute force in time 2^t, but cannot be solved significantly faster even using parallel computers—were proposed by Rivest, Shamir, and Wagner in 1996, and have been quite extensively studied since; the most popular instantiation relies on the assumption that 2^t repeated squarings mod N = pq require roughly 2^t parallel time. Our notion of a weak time-lock puzzle requires only that the puzzle cannot be solved in parallel time 2^{t^ε} (and thus we only need to rely on the relatively mild assumption that there are no huge improvements in the parallel complexity of repeated-squaring algorithms). We additionally show that if we replace assumption (2) with a non-interactive witness-indistinguishable proof (NIWI), and (3) with a uniform collision-resistant hash function, then a non-interactive (i.e., one-message) version of our protocol satisfies concurrent non-malleability w.r.t. uniform attackers.
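A toy Python sketch of the repeated-squaring time-lock puzzle of Rivest, Shamir, and Wagner described above: the creator, who knows the factorization N = pq, computes x^(2^t) mod N quickly via φ(N), while a solver who sees only (N, x, t) performs t sequential squarings. The primes and the parameter t below are toy values chosen only for illustration.

```python
def make_puzzle(p, q, t, x=2):
    """Puzzle creator: knowing the factorization N = p*q, it can compute
    x^(2^t) mod N quickly by reducing the exponent 2^t modulo phi(N)."""
    N = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)
    solution = pow(x, e, N)
    return (N, x, t), solution

def solve_puzzle(puzzle):
    """Solver sees only (N, x, t) and performs t squarings mod N, a computation
    conjectured not to parallelize (the kind of assumption the paper relies on)."""
    N, x, t = puzzle
    y = x % N
    for _ in range(t):
        y = (y * y) % N
    return y

if __name__ == "__main__":
    # Toy parameters: real instantiations use large random primes and large t.
    puzzle, sol = make_puzzle(p=1009, q=1013, t=10_000)
    assert solve_puzzle(puzzle) == sol
    print("sequential solver matches the creator's shortcut")
```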
{"title":"Two-Round and Non-Interactive Concurrent Non-Malleable Commitments from Time-Lock Puzzles","authors":"Huijia Lin, R. Pass, Pratik Soni","doi":"10.1109/FOCS.2017.59","DOIUrl":"https://doi.org/10.1109/FOCS.2017.59","url":null,"abstract":"Non-malleable commitments are a fundamental cryptographic tool for preventing against (concurrent) man-in-the-middle attacks. Since their invention by Dolev, Dwork, and Naor in 1991, the round-complexity of non-malleable commitments has been extensively studied, leading up to constant-round concurrent non-malleable commitments based only on one-way functions, and even 3-round concurrent non-malleable commitments based on subexponential one-way functions.But constructions of two-round, or non-interactive, non-malleable commitments have so far remained elusive; the only known construction relied on a strong and non-falsifiable assumption with a non-malleability flavor. Additionally, a recent result by Pass shows the impossibility of basing two-round non-malleable commitments on falsifiable assumptions using a polynomial-time black-box security reduction.In this work, we show how to overcome this impossibility, using super-polynomial-time hardness assumptions. Our main result demonstrates the existence of a two-round concurrent non-malleable commitment based on sub-exponential standard-type assumptions—notably, assuming the existence of the following primitives (all with subexponential security): (1) non-interactive commitments, (2) ZAPs (i.e., 2-round witness indistinguishable proofs), (3) collision-resistant hash functions, and (4) a weak time-lock puzzle.Primitives (1),(2),(3) can be based on e.g., the discrete log assumption and the RSA assumption. Time-lock puzzles—puzzles that can be solved by brute-force in time 2^t, but cannot be solved significantly faster even using parallel computers—were proposed by Rivest, Shamir, and Wagner in 1996, and have been quite extensively studied since; the most popular instantiation relies on the assumption that 2^t repeated squarings mod N = pq require roughly 2^t parallel time. Our notion of a weak time-lock puzzle, requires only that the puzzle cannot be solved in parallel time 2^{t^≥ilon} (and thus we only need to rely on the relatively mild assumption that there are no huge} improvements in the parallel complexity of repeated squaring algorithms).We additionally show that if replacing assumption (2) for a non-interactive witness indistinguishable proof (NIWI), and (3) for auniform} collision-resistant hash function, then a non-interactive} (i.e., one-message) version of our protocolsatisfies concurrent non-malleability w.r.t. uniform attackers.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127378369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We give a quantum algorithm for solving semidefinite programs (SDPs). It has worst-case running time n^{1/2} m^{1/2} s^2 poly(log(n), log(m), R, r, 1/δ), with n and s the dimension and row-sparsity of the input matrices, respectively, m the number of constraints, δ the accuracy of the solution, and R, r upper bounds on the size of the optimal primal and dual solutions, respectively. This gives a square-root unconditional speed-up over any classical method for solving SDPs in both n and m. We prove that the algorithm cannot be substantially improved (in terms of n and m) by giving an Ω(n^{1/2} + m^{1/2}) quantum lower bound for solving semidefinite programs with constant s, R, r and δ. The quantum algorithm is constructed by a combination of quantum Gibbs sampling and the multiplicative weight method. In particular, it is based on a classical algorithm of Arora and Kale for approximately solving SDPs. We present a modification of their algorithm to eliminate the need for solving an inner linear program, which may be of independent interest.
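As background on the Arora-Kale-style framework mentioned above, the following NumPy sketch shows the classical Gibbs-state (matrix multiplicative weights) update that such SDP solvers iterate; the quantum algorithm's speed-up comes from preparing these Gibbs states quantumly, which this sketch does not attempt to model. The matrices and the step size η below are illustrative.

```python
import numpy as np

def gibbs_state(loss_matrices, eta):
    """Return the Gibbs (matrix multiplicative weights) density matrix
    rho = exp(-eta * sum(loss_matrices)) / Tr[exp(-eta * sum(loss_matrices))],
    computed via an eigendecomposition of the accumulated symmetric loss."""
    H = eta * sum(loss_matrices)
    w, V = np.linalg.eigh(H)
    w = w - w.min()              # shift for numerical stability; cancels in the ratio
    exp_w = np.exp(-w)
    rho = (V * exp_w) @ V.conj().T
    return rho / np.trace(rho)

if __name__ == "__main__":
    # Toy illustration with two random symmetric 4x4 "constraint" matrices.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
    B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
    rho = gibbs_state([A, B], eta=0.5)
    print(np.trace(rho))  # approximately 1.0; rho is positive semidefinite
```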
{"title":"Quantum Speed-Ups for Solving Semidefinite Programs","authors":"F. Brandão, K. Svore","doi":"10.1109/FOCS.2017.45","DOIUrl":"https://doi.org/10.1109/FOCS.2017.45","url":null,"abstract":"We give a quantum algorithm for solving semidefinite programs (SDPs). It has worst-case running time n^{frac{1}{2}} m^{frac{1}{2}} s^2 poly(log(n), log(m), R, r, 1/δ), with n and s the dimension and row-sparsity of the input matrices, respectively, m the number of constraints, δ the accuracy of the solution, and R, r upper bounds on the size of the optimal primal and dual solutions, respectively. This gives a square-root unconditional speed-up over any classical method for solving SDPs both in n and m. We prove the algorithm cannot be substantially improved (in terms of n and m) giving a Ω(n^{frac{1}{2}}+m^{frac{1}{2}}) quantum lower bound for solving semidefinite programs with constant s, R, r and δ. The quantum algorithm is constructed by a combination of quantum Gibbs sampling and the multiplicative weight method. In particular it is based on a classical algorithm of Arora and Kale for approximately solving SDPs. We present a modification of their algorithm to eliminate the need for solving an inner linear program which may be of independent interest.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132161588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive coding, pioneered by Schulman (FOCS 92, STOC 93), is concerned with making communication protocols resilient to adversarial noise. The canonical model allows the adversary to alter a small constant fraction of symbols, chosen at the adversary's discretion, as they pass through the communication channel. Braverman, Gelles, Mao, and Ostrovsky (2015) proposed a far-reaching generalization of this model, whereby the adversary can additionally manipulate the channel by removing and inserting symbols. They showed how to faithfully simulate any protocol in this model with corruption rate up to 1/18, using a constant-size alphabet and a constant-factor overhead in communication. We give an optimal simulation of any protocol in this generalized model of substitutions, insertions, and deletions, tolerating a corruption rate up to 1/4 while keeping the alphabet to a constant size and the communication overhead to a constant factor. Our corruption tolerance matches an impossibility result for corruption rate 1/4 which holds even for substitutions alone (Braverman and Rao, STOC 11).
{"title":"Optimal Interactive Coding for Insertions, Deletions, and Substitutions","authors":"Alexander A. Sherstov, Pei Wu","doi":"10.1109/FOCS.2017.30","DOIUrl":"https://doi.org/10.1109/FOCS.2017.30","url":null,"abstract":"Interactive coding, pioneered by Schulman (FOCS 92, STOC 93), is concerned with making communication protocols resilient to adversarial noise. The canonical model allows the adversary to alter a small constant fraction of symbols, chosen at the adversarys discretion, as they pass through the communication channel. Braverman, Gelles, Mao, and Ostrovsky (2015) proposed a far-reaching generalization of this model, whereby the adversary can additionally manipulate the channel by removing and inserting symbols. They showed how to faithfully simulate any protocol in this model with corruption rate up to 1/18, using a constant-size alphabet and a constant-factor overhead in communication. We give an optimal simulation of any protocol in this generalized model of substitutions, insertions, and deletions, tolerating a corruption rate up to 1/4 while keeping the alphabet to a constant size and the communication overhead to a constant factor. Our corruption tolerance matches an impossibility result for corruption rate 1/4 which holds even for substitutions alone (Braverman and Rao, STOC 11).","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130061042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program CC[f,y] is parametrized by an arbitrary polynomial-time computable function f along with a target value y, and we define CC[f,y](x) to output 1 if f(x)=y and 0 otherwise. In other words, the program performs an arbitrary computation f and then compares its output against a target y. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function f or the target value y, as long as they are chosen from some distribution where y has sufficient pseudo-entropy given f. We also extend our result to multi-bit compute-and-compare programs MBCC[f,y,z](x) which output a message z if f(x)=y. Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS 16), which constructed a conjunction obfuscator under a non-standard entropic ring-LWE assumption, while here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take any encryption scheme and publish an obfuscated plaintext equality tester that allows users to check whether a ciphertext decrypts to some target value y; as long as y has sufficient pseudo-entropy, this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption with one-sided attribute-hiding security, and to upgrade witness encryption to indistinguishability obfuscation which is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded-length key cycles. Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov and Halevi (TCC 15), but only in a carefully restricted manner which is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula and Waters (EUROCRYPT 17) in a seemingly unrelated context.
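For concreteness, a minimal Python sketch of the plain, unobfuscated functionalities defined above (CC[f,y] and MBCC[f,y,z]); the LWE-based obfuscator itself is the subject of the paper and is not reproduced here. The function f and the values used below are hypothetical.

```python
def make_cc(f, y):
    """Compute-and-compare program CC[f, y]: outputs 1 iff f(x) == y, else 0."""
    return lambda x: 1 if f(x) == y else 0

def make_mbcc(f, y, z):
    """Multi-bit variant MBCC[f, y, z]: outputs the message z iff f(x) == y."""
    return lambda x: z if f(x) == y else None

if __name__ == "__main__":
    # Toy example: f is a hypothetical hash-like function, y a target value.
    f = lambda x: sum(x) % 257
    cc = make_cc(f, y=42)
    print(cc([40, 2]), cc([1, 2, 3]))   # prints: 1 0
```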
{"title":"Obfuscating Compute-and-Compare Programs under LWE","authors":"D. Wichs, Giorgos Zirdelis","doi":"10.1109/FOCS.2017.61","DOIUrl":"https://doi.org/10.1109/FOCS.2017.61","url":null,"abstract":"We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program CC[f,y] is parametrized by an arbitrary polynomial-time computable function f along with a target value y and we define CC[f,y](x) to output 1 if f(x)=y and 0 otherwise. In other words, the program performs an arbitrary {computation} f and then compares its output against a target y. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function f or the target value y, as long as they are chosen from some distribution where y has sufficient pseudo-entropy given f. We also extend our result to multi-bit compute-and-compare programs MBCC[f,y,z](x) which output a message z if f(x)=y.Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating {conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS 16) which constructed a conjunction obfuscator under a non-standard entropic ring-LWE assumption, while here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take any encryption scheme and publish an obfuscated plaintext equality tester that allows users to check whether a ciphertext decrypts to some target value y; as long as y has sufficient pseudo-entropy this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption with one-sided attribute-hiding security, and to upgrade witness encryption to indistinguishability obfuscation which is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded length key cycles.Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov and Halevi (TCC 15), but only in a carefully restricted manner which is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula and Waters (EUROCRYPT 17) in a seemingly unrelated context.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114431389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann
Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve gives time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data were small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, has established this as an influential notion for algorithm design. We introduce a much-needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:
• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)
• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.
• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
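A small Python sketch of the decompress-and-solve baseline discussed above, assuming the string is given as a straight-line program (a common way to present a grammar-compression): expand the size-n grammar into the length-N string, then run a textbook uncompressed algorithm (here a quadratic LCS DP as a stand-in for T(N)). The toy grammar and comparison string are made up for illustration.

```python
def expand(slp, symbol):
    """Expand a straight-line program (grammar compression) into the full string.
    slp maps each symbol to either a terminal character or a pair of symbols."""
    rule = slp[symbol]
    if isinstance(rule, str):
        return rule
    left, right = rule
    return expand(slp, left) + expand(slp, right)

def lcs_length(a, b):
    """Textbook O(|a|*|b|) LCS dynamic program: the 'solve' half of decompress-and-solve."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

if __name__ == "__main__":
    # Toy SLP: A -> ab, B -> Ab, S -> AB; S expands to the string "ababb".
    slp = {"a": "a", "b": "b", "A": ("a", "b"), "B": ("A", "b"), "S": ("A", "B")}
    s = expand(slp, "S")          # length-N string obtained from the size-n grammar
    print(s, lcs_length(s, "abba"))
```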
{"title":"Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve","authors":"Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann","doi":"10.1109/FOCS.2017.26","DOIUrl":"https://doi.org/10.1109/FOCS.2017.26","url":null,"abstract":"Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve gives time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data was small.We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design.We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121674083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove a lower bound on the size of small-depth Frege refutations of the Tseitin contradiction on the grid. We conclude that such refutations of polynomial size must use formulas of almost logarithmic depth.
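As background not spelled out in the abstract, the Tseitin contradiction it refers to can be stated as follows (for the grid, G is taken to be the n-by-n grid graph):

```latex
% Tseitin contradiction on a graph G = (V, E) with a charge function \chi : V \to \{0,1\}
% of odd total charge, i.e., \sum_{v \in V} \chi(v) \equiv 1 \pmod 2.
% One Boolean variable x_e per edge e; for every vertex v the formula asserts
\bigoplus_{e \ni v} x_e \;=\; \chi(v) \qquad \text{for all } v \in V .
% Summing all constraints mod 2 counts every edge twice on the left (giving 0) but yields
% the odd total charge on the right, so the system, and its CNF encoding, is unsatisfiable.
```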
{"title":"On Small-Depth Frege Proofs for Tseitin for Grids","authors":"J. Håstad","doi":"10.1145/3425606","DOIUrl":"https://doi.org/10.1145/3425606","url":null,"abstract":"We prove a lower bound on the size of a small depth Frege refutation of the Tseitin contradiction on the grid. We conclude that polynomial size such refutations must use formulas of almost logarithmic depth.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121772515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kun He, Liangpan Li, Xingwu Liu, Yuyi Wang, Mingji Xia
A tight criterion under which the abstract version of the Lovász Local Lemma (abstract-LLL) holds was given by Shearer [41] decades ago. However, little is known about the corresponding criterion for the variable version of the LLL (variable-LLL), where events are generated by independent random variables, even though variable-LLL naturally models, and suffices for, almost all applications of the LLL. We introduce a necessary and sufficient criterion for variable-LLL, in terms of the probabilities of the events and the event-variable graph specifying the dependency among the events. Based on this new criterion, we obtain boundaries for two families of event-variable graphs, namely cyclic and treelike bigraphs. These are the first two non-trivial cases where the variable-LLL boundary is fully determined. As a byproduct, we also provide a universal constructive method to find a set of events whose union has the maximum probability, given the probability vector and the event-variable graph. Though it is #P-hard in general to determine variable-LLL boundaries, we can to some extent decide whether a gap exists between a variable-LLL boundary and the corresponding abstract-LLL boundary. In particular, we show that the existence of a gap can be decided without solving Shearer's conditions or checking our variable-LLL criterion. Equipped with this powerful theorem, we show that there is no gap if the base graph of the event-variable graph is a tree, while a gap appears if the base graph has an induced cycle of length at least 4. The problem is almost completely solved except when the base graph has only 3-cliques, in which case we also obtain partial solutions. A set of reduction rules is established that makes it possible to infer the gap existence of an event-variable graph from known ones. As an application, various event-variable graphs, in particular combinatorial ones, are shown to be gapful/gapless.
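For orientation, the classical symmetric form of the Lovász Local Lemma, a special case of the abstract criteria discussed above, is reproduced below together with the variable-version setting in which the paper works.

```latex
% Symmetric Lovász Local Lemma (classical form, stated here only as background).
% Events A_1, \dots, A_m, each with \Pr[A_i] \le p, each depending on at most d others:
e \, p \,(d+1) \le 1 \quad\Longrightarrow\quad \Pr\Big[\bigcap_{i=1}^{m} \overline{A_i}\Big] > 0 .
% In the variable version studied above, each A_i is determined by a subset vbl(A_i) of
% independent random variables X_1, \dots, X_n, and the event-variable bigraph joins A_i
% to the variables in vbl(A_i).
```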
{"title":"Variable-Version Lovász Local Lemma: Beyond Shearer's Bound","authors":"Kun He, Liangpan Li, Xingwu Liu, Yuyi Wang, Mingji Xia","doi":"10.1109/FOCS.2017.48","DOIUrl":"https://doi.org/10.1109/FOCS.2017.48","url":null,"abstract":"A tight criterion under which the abstract version Lovász Local Lemma (abstract-LLL) holds was given by Shearer [41] decades ago. However, little is known about that of the variable version LLL (variable-LLL) where events are generated by independent random variables, though variable- LLL naturally models and is enough for almost all applications of LLL. We introduce a necessary and sufficient criterion for variable-LLL, in terms of the probabilities of the events and the event-variable graph specifying the dependency among the events. Based on this new criterion, we obtain boundaries for two families of event-variable graphs, namely, cyclic and treelike bigraphs. These are the first two non-trivial cases where the variable-LLL boundary is fully determined. As a byproduct, we also provide a universal constructive method to find a set of events whose union has the maximum probability, given the probability vector and the event-variable graph.Though it is #P-hard in general to determine variable- LLL boundaries, we can to some extent decide whether a gap exists between a variable-LLL boundary and the corresponding abstract-LLL boundary. In particular, we show that the gap existence can be decided without solving Shearer’s conditions or checking our variable-LLL criterion. Equipped with this powerful theorem, we show that there is no gap if the base graph of the event-variable graph is a tree, while gap appears if the base graph has an induced cycle of length at least 4. The problem is almost completely solved except when the base graph has only 3-cliques, in which case we also get partial solutions.A set of reduction rules are established that facilitate to infer gap existence of a event-variable graph from known ones. As an application, various event-variable graphs, in particular combinatorial ones, are shown to be gapful/gapless.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"117 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings, for a wide range of bidder valuations including unit-demand, additive, constrained additive, XOS, and subadditive. We obtain our learning results in two settings. The first is the commonly studied setting where sample access to the bidders' distributions over valuations is given, for both regular distributions and arbitrary distributions with bounded support. Here, our algorithms require polynomially many samples in the number of items and bidders. The second is a more general max-min learning setting that we introduce, where we are given approximate distributions, and we seek to compute a mechanism whose revenue is approximately optimal simultaneously for all true distributions that are close to the ones we were given. These results are more general in that they imply the sample-based results, and are also applicable in settings where we have no sample access to the underlying distributions but have estimated them indirectly via market research or by observation of bidder behavior in previously run, potentially non-truthful auctions. All our results hold for valuation distributions satisfying the standard (and necessary) independence-across-items property. They also generalize and improve upon recent works of Goldner and Karlin and of Morgenstern and Roughgarden, which provided algorithms that learn approximately optimal multi-item mechanisms in more restricted settings with additive, subadditive and unit-demand valuations using sample access to distributions. We generalize these results to the complete unit-demand, additive, and XOS setting, to i.i.d. subadditive bidders, and to the max-min setting. Our results are enabled by new uniform convergence bounds for hypothesis classes under product measures. Our bounds result in exponential savings in sample complexity compared to bounds derived by bounding the VC dimension, and are of independent interest.
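As a toy illustration of sample-based mechanism learning, far simpler than the multi-item, multi-bidder mechanisms of the paper: for a single item and a single bidder, empirical revenue maximization posts the price that maximizes revenue on the observed value samples. The sample values below are hypothetical.

```python
def empirical_reserve(samples):
    """Single-item, single-bidder toy: choose the posted price r maximizing
    r * (empirical fraction of sampled values >= r).  Only an illustration of
    learning from samples, not the paper's multi-item constructions."""
    values = sorted(samples, reverse=True)
    best_price, best_revenue = 0.0, 0.0
    for k, v in enumerate(values, start=1):   # price v is accepted by k of the samples
        revenue = v * k / len(values)
        if revenue > best_revenue:
            best_price, best_revenue = v, revenue
    return best_price, best_revenue

if __name__ == "__main__":
    samples = [0.3, 0.9, 0.5, 1.2, 0.8, 0.4]   # hypothetical value samples
    print(empirical_reserve(samples))
```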
{"title":"Learning Multi-Item Auctions with (or without) Samples","authors":"Yang Cai, C. Daskalakis","doi":"10.1109/FOCS.2017.54","DOIUrl":"https://doi.org/10.1109/FOCS.2017.54","url":null,"abstract":"We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings, for a wide range of bidder valuations including unit-demand, additive, constrained additive, XOS, and subadditive. We obtain our learning results in two settings. The first is the commonly studied setting where sample access to the bidders distributions over valuations is given, for both regular distributions and arbitrary distributions with bounded support. Here, our algorithms require polynomially many samples in the number of items and bidders. The second is a more general max-min learning setting that we introduce, where we are given approximate distributions, and we seek to compute a mechanism whose revenue is approximately optimal simultaneously for all true distributions that are close to the ones we were given. These results are more general in that they imply the sample-based results, and are also applicable in settings where we have no sample access to the underlying distributions but have estimated them indirectly via market research or by observation of bidder behavior in previously run, potentially non-truthful auctions.All our results hold for valuation distributions satisfying the standard (and necessary) independence-across-items property. They also generalize and improve upon recent works of Goldner and Karlin cite{GoldnerK16} and Morgenstern and Roughgarden cite{MorgensternR16, which have provided algorithms that learn approximately optimal multi-item mechanisms in more restricted settings with additive, subadditive and unit-demand valuations using sample access to distributions. We generalize these results to the complete unit-demand, additive, and XOS setting, to i.i.d. subadditive bidders, and to the max-min setting.Our results are enabled by new uniform convergence bounds for hypotheses classes under product measures. Our bounds result in exponential savings in sample complexity compared to bounds derived by bounding the VC dimension and are of independent interest.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131943718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}