
Latest publications: 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)

Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning
R. Raz
We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager [15] and shows that for some learning problems a large storage space is crucial. More formally, in the problem of parity learning, an unknown string x ∈ {0,1}^n is chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where each a_t is uniformly distributed over {0,1}^n and b_t is the inner product of a_t and x, modulo 2. We show that any algorithm for parity learning that uses less than n^2/25 bits of memory requires an exponential number of samples. Previously, there was no non-trivial lower bound on the number of samples needed for any learning problem, even if the allowed memory size is O(n) (where n is the space needed to store one sample). We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length n and time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses fewer than n^2/25 memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.
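The sample model above is easy to simulate, and it also shows why quadratic memory suffices: a learner that may store ~n^2 bits recovers x from O(n) samples by Gaussian elimination over GF(2). This is exactly the regime the lower bound says cannot be beaten with much less space. A minimal sketch (the bitmask encoding and parameters are illustrative, not from the paper):

```python
import random

def inner(a, x):
    """Inner product over GF(2) of bit-vectors a, x stored as int bitmasks."""
    return bin(a & x).count("1") & 1

def learn_parity(n, sample):
    """Recover the secret x from samples (a_t, b_t) with b_t = <a_t, x> mod 2.
    Keeps up to n equations of n bits each, i.e. ~n^2 bits of memory."""
    pivots = {}  # leading-bit position -> (row, rhs)
    while len(pivots) < n:
        a, b = sample()
        for p in sorted(pivots, reverse=True):  # reduce a against the basis
            if (a >> p) & 1:
                pa, pb = pivots[p]
                a ^= pa
                b ^= pb
        if a:  # new independent equation; its leading bit becomes a pivot
            pivots[a.bit_length() - 1] = (a, b)
    x = 0
    for p in sorted(pivots):  # back-substitute, lowest pivot first
        a, b = pivots[p]
        x |= (b ^ inner(a ^ (1 << p), x)) << p
    return x

if __name__ == "__main__":
    n = 16
    rng = random.Random(1)
    secret = rng.getrandbits(n)

    def stream():
        a = rng.getrandbits(n)
        return a, inner(a, secret)

    assert learn_parity(n, stream) == secret
```

The lower bound says that with o(n^2) memory no such strategy (or any other) can succeed without exponentially many samples.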
Pub Date : 2016-02-16 DOI: 10.1145/3186563
Citations: 77
A New Approach for Testing Properties of Discrete Distributions
Pub Date : 2016-01-21 DOI: 10.1109/FOCS.2016.78
Ilias Diakonikolas, D. Kane
We study problems in distribution property testing: Given sample access to one or more unknown discrete distributions, we want to determine whether they have some global property or are epsilon-far from having the property in L1 distance (equivalently, total variation distance, or "statistical distance"). In this work, we give a novel general approach for distribution testing. We describe two techniques: our first technique gives sample-optimal testers, while our second technique gives matching sample lower bounds. As a consequence, we resolve the sample complexity of a wide variety of testing problems. Our upper bounds are obtained via a modular reduction-based approach. Our approach yields optimal testers for numerous problems by using a standard L2-identity tester as a black-box. Using this recipe, we obtain simple estimators for a wide range of problems, encompassing many problems previously studied in the TCS literature, namely: (1) identity testing to a fixed distribution, (2) closeness testing between two unknown distributions (with equal/unequal sample sizes), (3) independence testing (in any number of dimensions), (4) closeness testing for collections of distributions, and (5) testing histograms. For all of these problems, our testers are sample-optimal, up to constant factors. With the exception of (1), ours are the first sample-optimal testers for the corresponding problems. Moreover, our estimators are significantly simpler to state and analyze compared to previous results. As an important application of our reduction-based technique, we obtain the first adaptive algorithm for testing equivalence between two unknown distributions. The sample complexity of our algorithm depends on the structure of the unknown distributions - as opposed to merely their domain size - and is significantly better compared to the worst-case optimal L1-tester in many natural instances. Moreover, our technique naturally generalizes to other metrics beyond the L1-distance. As an illustration of its flexibility, we use it to obtain the first near-optimal equivalence tester under the Hellinger distance. Our lower bounds are obtained via a direct information-theoretic approach: Given a candidate hard instance, our proof proceeds by bounding the mutual information between appropriate random variables. While this is a classical method in information theory, prior to our work, it had not been used in this context. Previous lower bounds relied either on the birthday paradox, or on moment-matching and were thus restricted to symmetric properties. Our lower bound approach does not suffer from any such restrictions and gives tight sample lower bounds for the aforementioned problems.
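To make problem (1) concrete, here is a Pearson-style collision statistic of the general flavor used in sample-optimal identity testers. The normalization and the experiment's parameters are illustrative choices, not the paper's (whose reduction to an L2 tester is more refined):

```python
import random
from collections import Counter

def identity_statistic(samples, q):
    """Pearson-style statistic sum_i ((N_i - m*q_i)^2 - N_i) / (m*q_i),
    where N_i counts occurrences of symbol i among m samples.
    It stays near 0 when the samples come from q, and grows roughly like
    m times the chi-square distance when they come from a far distribution p."""
    m = len(samples)
    counts = Counter(samples)
    return sum(((counts[i] - m * qi) ** 2 - counts[i]) / (m * qi)
               for i, qi in enumerate(q))

if __name__ == "__main__":
    rng = random.Random(0)
    k, m = 10, 20000
    q = [1.0 / k] * k                              # reference distribution
    p = [0.15] * (k // 2) + [0.05] * (k // 2)      # 0.25-far from q in TV
    close = identity_statistic(rng.choices(range(k), q, k=m), q)
    far = identity_statistic(rng.choices(range(k), p, k=m), q)
    assert far > close  # the far sample is detected by thresholding
```

Thresholding such a statistic between the two regimes gives an identity tester; the paper's contribution is doing this with optimal sample complexity across all the listed problems.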
Citations: 149
Simulated Quantum Annealing Can Be Exponentially Faster Than Classical Simulated Annealing
Pub Date : 2016-01-12 DOI: 10.1109/FOCS.2016.81
E. Crosson, A. Harrow
Can quantum computers solve optimization problems much more quickly than classical computers? One major piece of evidence for this proposition has been the fact that Quantum Annealing (QA) finds the minimum of some cost functions exponentially more quickly than classical Simulated Annealing (SA). One such cost function is the simple “Hamming weight with a spike” function, in which the input is an n-bit string and the objective function is simply the Hamming weight, plus a tall thin barrier centered around Hamming weight n/4. While the global minimum of this cost function can be found by inspection, it is also a plausible toy model of the sort of local minima that arise in real-world optimization problems. It was shown by Farhi, Goldstone and Gutmann [1] that for this example SA takes exponential time and QA takes polynomial time, and the same result was generalized by Reichardt [2] to include barriers with width n^ζ and height n^α for ζ + α ≤ 1/2. This advantage could be explained in terms of quantum-mechanical “tunneling.” Our work considers a classical algorithm known as Simulated Quantum Annealing (SQA), which relates certain quantum systems to classical Markov chains. By proving that these chains mix rapidly, we show that SQA runs in polynomial time on the Hamming weight with a spike problem in much of the parameter regime where QA achieves an exponential advantage over SA. While our analysis only covers this toy model, it can be seen as evidence against the prospect of exponential quantum speedup using tunneling. Our technical contributions include extending the canonical path method for analyzing Markov chains to cover the case when not all vertices can be connected by low-congestion paths. We also develop methods for taking advantage of warm starts and for relating the quantum state in QA to the probability distribution in SQA. These techniques may be of use in future studies of SQA or of rapidly mixing Markov chains in general.
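For reference on the classical side, plain single-bit-flip SA on a toy cost of this shape looks as follows; the barrier height and cooling schedule are made-up illustrative parameters, not the ones analyzed in the paper:

```python
import math
import random

def spike_cost(z, n, height=None):
    """Toy 'Hamming weight with a spike': weight of the n-bit string z,
    plus a barrier at weight n/4 (barrier height here is an arbitrary choice)."""
    w = bin(z).count("1")
    h = n if height is None else height
    return w + (h if w == n // 4 else 0)

def simulated_annealing(n, steps, rng):
    """Minimize spike_cost by single-bit flips with a linear cooling schedule;
    returns the best state seen and its cost."""
    z = rng.getrandbits(n)
    cost = spike_cost(z, n)
    best, best_cost = z, cost
    for t in range(steps):
        temp = max(0.01, 1.0 - t / steps)     # linear cooling
        z2 = z ^ (1 << rng.randrange(n))      # flip one random bit
        c2 = spike_cost(z2, n)
        if c2 <= cost or rng.random() < math.exp((cost - c2) / temp):
            z, cost = z2, c2
            if cost < best_cost:
                best, best_cost = z, cost
    return best, best_cost
```

A random start has weight near n/2, so to reach the global minimum at weight 0 the walk must cross the barrier at weight n/4; this is the local-minimum effect that makes SA exponentially slow here, while QA, and by this paper also SQA, get through in polynomial time.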
Citations: 97
Rectangular Kronecker Coefficients and Plethysms in Geometric Complexity Theory
Pub Date : 2015-12-11 DOI: 10.1109/FOCS.2016.50
Christian Ikenmeyer, G. Panova
The geometric complexity theory program is an approach to separating algebraic complexity classes; more precisely, to showing superpolynomial growth of the determinantal complexity dc(perm) of the permanent polynomial. Mulmuley and Sohoni showed that the vanishing behaviour of rectangular Kronecker coefficients could in principle be used to show some lower bounds on dc(perm), and they conjectured that superpolynomial lower bounds on dc(perm) could be shown in this way. In this paper we disprove this conjecture of Mulmuley and Sohoni, i.e., we prove that the vanishing of rectangular Kronecker coefficients cannot be used to prove superpolynomial lower bounds on dc(perm).
Citations: 52
Which Regular Expression Patterns Are Hard to Match?
Pub Date : 2015-11-22 DOI: 10.1109/FOCS.2016.56
A. Backurs, P. Indyk
Regular expressions constitute a fundamental notion in formal language theory and are frequently used in computer science to define search patterns. In particular, regular expression matching and membership testing are widely used computational primitives, employed in many programming languages and text processing utilities. A classic algorithm for these problems constructs and simulates a non-deterministic finite automaton corresponding to the expression, resulting in an O(m n) running time (where m is the length of the pattern and n is the length of the text). This running time can be improved slightly (by a polylogarithmic factor), but no significantly faster solutions are known. At the same time, much faster algorithms exist for various special cases of regular expressions, including dictionary matching, wildcard matching, subset matching, the word break problem, etc. In this paper, we show that the complexity of regular expression matching can be characterized based on its depth (when interpreted as a formula). Our results hold for expressions involving concatenation, OR, Kleene star and Kleene plus. For regular expressions of depth two (involving any combination of the above operators), we show the following dichotomy: matching and membership testing can be solved in near-linear time, except for "concatenations of stars", which cannot be solved in strongly sub-quadratic time assuming the Strong Exponential Time Hypothesis (SETH). For regular expressions of depth three the picture is more complex. Nevertheless, we show that all problems can either be solved in strongly sub-quadratic time, or cannot be solved in strongly sub-quadratic time assuming SETH. An intriguing special case of membership testing involves regular expressions of the form "a star of an OR of concatenations", e.g., [a|ab|bc]*. This corresponds to the so-called word break problem, for which a dynamic programming algorithm with a runtime of (roughly) O(n √m) is known. We show that the latter bound is not tight and improve the runtime to O(n m^0.44...).
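The word break problem mentioned at the end has a textbook dynamic program that the faster algorithms improve on: decide whether each prefix of the text splits into dictionary words. A minimal sketch (a naive baseline, not the paper's improved algorithm):

```python
def word_break(text, words):
    """ok[i] is True iff text[:i] can be split into dictionary words.
    Equivalent to membership testing for patterns like [a|ab|bc]*."""
    words = set(words)
    n = len(text)
    ok = [False] * (n + 1)
    ok[0] = True  # the empty prefix always splits
    for i in range(1, n + 1):
        # prefix text[:i] splits iff some word w ends exactly at position i
        ok[i] = any(ok[i - len(w)] and text[i - len(w):i] == w
                    for w in words if len(w) <= i)
    return ok[n]

if __name__ == "__main__":
    assert word_break("abbc", ["a", "ab", "bc"])      # "ab" + "bc"
    assert not word_break("acb", ["a", "ab", "bc"])   # no split exists
```

The membership test for [a|ab|bc]* on a text of length n is exactly word_break with dictionary {a, ab, bc}.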
Citations: 89
Optimizing Star-Convex Functions
Pub Date : 2015-11-13 DOI: 10.1109/FOCS.2016.71
Jasper C. H. Lee, Paul Valiant
Star-convexity is a significant relaxation of the notion of convexity that allows for functions that do not have (sub)gradients at most points, and may even be discontinuous everywhere except at the global optimum. We introduce a polynomial time algorithm for optimizing the class of star-convex functions, under no Lipschitz or other smoothness assumptions whatsoever, and no restrictions except exponential boundedness on a region about the origin, and Lebesgue measurability. The algorithm's performance is polynomial in the requested number of digits of accuracy and the dimension of the search domain. This contrasts with the previous best known algorithm of Nesterov and Polyak, which has exponential dependence on the number of digits of accuracy, but only n^ω dependence on the dimension n (where ω is the matrix multiplication exponent), and which further requires Lipschitz second differentiability of the function [1].
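A concrete example helps fix the definition: f is star-convex about a global minimizer x* if f((1-t)x* + tx) ≤ (1-t)f(x*) + tf(x) for all x and t in [0,1]. Any nonnegative f with f(0) = 0 that is positively homogeneous of degree 1 satisfies this about the origin, even when its angular profile makes it non-convex. A small numeric check (the particular f below is an illustrative choice, not from the paper):

```python
import math

def f(x, y):
    """f = r * (2 + sin(3*theta)) in polar coordinates: homogeneous of
    degree 1 with f(0,0) = 0, hence star-convex about the origin,
    but non-convex because of the angular wiggle."""
    r = math.hypot(x, y)
    if r == 0.0:
        return 0.0
    return r * (2 + math.sin(3 * math.atan2(y, x)))

def star_convex_along(x, y, steps=100):
    """Verify f(t*(x,y)) <= t*f(x,y) for t in (0,1] (star-convexity toward 0)."""
    fx = f(x, y)
    return all(f(t * x, t * y) <= t * fx + 1e-9
               for t in (s / steps for s in range(1, steps + 1)))
```

Convexity fails for this f: with u = (0, 1) and v = (-√3/2, -1/2) one gets f(u) = f(v) = 1, while the midpoint of u and v has value 1.5, exceeding the average 1.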
Citations: 23
A Deterministic Polynomial Time Algorithm for Non-commutative Rational Identity Testing
Pub Date : 2015-11-11 DOI: 10.1109/FOCS.2016.95
A. Garg, L. Gurvits, R. Oliveira, A. Wigderson
Symbolic matrices in non-commuting variables, and the related structural and algorithmic questions, have a remarkable number of diverse origins and motivations. They arise independently in (commutative) invariant theory and representation theory, linear algebra, optimization, linear system theory, quantum information theory, and naturally in non-commutative algebra. In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over Q is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time [1] (whether or not randomization is allowed). The main (simple!) technical contribution of this paper is an analysis of an existing “operator scaling” algorithm due to Gurvits [2], which solved some special cases of the same problem we do (these already include optimization problems like matroid intersection). This analysis of the running time of Gurvits' algorithm combines results from some of these different fields. It lower bounds a parameter of quantum maps called capacity, via degree bounds from algebraic geometry on the Left Right group action, which in turn is relevant due to certain characterization of the free skew (non-commutative) field. Via the known connections above, our algorithm efficiently solves several problems in different areas which had only exponential-time algorithms prior to this work. These include the “word problem” for the free skew field (namely identity testing for rational expressions over non-commuting variables), testing if a quantum operator is “rank decreasing”, and the membership problem in the null-cone of a natural group action arising in Geometric Complexity Theory (GCT). 
Moreover, extending our algorithm to actually compute the non-commutative rank of a symbolic matrix, yields an efficient factor-2 approximation to the standard commutative rank. This naturally suggests the challenge to improve this approximation factor, noting that a fully polynomial approximation scheme may lead to a deterministic PIT algorithm. Finally, our algorithm may also be viewed as efficiently solving a family of structured systems of quadratic equations, which seem general enough to encode interesting decision and optimization problems.
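For contrast, the commutative analogue mentioned in the abstract has a simple efficient randomized solution: evaluate the symbolic matrix at a random point over a large prime field and compute the determinant, relying on the Schwartz-Zippel lemma. A sketch of that baseline, not of the paper's deterministic non-commutative algorithm (the entry encoding and parameters are illustrative):

```python
import random

P = (1 << 31) - 1  # a Mersenne prime; the field F_P

def det_mod(m, p=P):
    """Determinant of an integer matrix mod prime p via Gaussian elimination."""
    m = [[v % p for v in row] for row in m]
    n, det = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det  # row swap flips the sign
        det = det * m[c][c] % p
        inv = pow(m[c][c], p - 2, p)  # modular inverse via Fermat
        for r in range(c + 1, n):
            f = m[r][c] * inv % p
            for k in range(c, n):
                m[r][k] = (m[r][k] - f * m[c][k]) % p
    return det % p

def probably_invertible(entries, variables, trials=8, rng=None):
    """Randomized PIT for a symbolic determinant over COMMUTING variables.
    Each entry is a function mapping a {variable: value} assignment to an int."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        env = {v: rng.randrange(P) for v in variables}
        if det_mod([[e(env) for e in row] for row in entries]):
            return True   # one nonzero evaluation certifies det is not identically 0
    return False          # vanished every time: almost surely identically zero
```

For example, [[x, y], [y, x]] has det x^2 - y^2 and is reported invertible, while [[x, x], [y, y]] has identically zero determinant. It is this random-evaluation route that breaks down over non-commuting variables, which is why the paper's deterministic operator-scaling analysis is needed there.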
非交换变量中的符号矩阵,以及相关的结构和算法问题,有许多不同的起源和动机。它们独立出现在(交换)不变理论和表示理论、线性代数、最优化、线性系统理论、量子信息论中,自然也出现在非交换代数中。本文提出了一种确定性多项式时间算法,用于检验Q上非交换变量的符号矩阵是否可逆。交换变量的类似问题是符号行列式的著名多项式恒等检验(PIT)。与可交换的情况相比,它有一个有效的概率算法,而非可交换设置的最佳先前算法需要指数时间[1](无论是否允许随机化)。本文的主要(简单的)技术贡献是分析了Gurvits[2]提出的一种现有的“算子缩放”算法,该算法解决了我们所做的相同问题的一些特殊情况(这些已经包括优化问题,如矩阵相交)。Gurvits算法的运行时间分析结合了这些不同领域的结果。它的下界是量子映射的一个参数,称为容量,通过代数几何上的左右群作用的度界,这反过来又与自由偏斜(非交换)场的某些特征相关。通过上述已知的连接,我们的算法有效地解决了在此工作之前只有指数时间算法的不同领域的几个问题。这些问题包括自由偏场的“字问题”(即非交换变量上有理表达式的恒等检验),量子算子是否“秩递减”的检验,以及几何复杂性理论(GCT)中自然群作用的零锥中的隶属性问题。此外,将我们的算法扩展到实际计算符号矩阵的非交换秩,可以得到标准交换秩的有效因子2近似值。这自然表明了改进这个近似因子的挑战,注意到一个完全多项式的近似方案可能导致确定性的PIT算法。最后,我们的算法也可以被视为有效地解决了一组二次方程的结构化系统,这些系统似乎足够通用,可以编码有趣的决策和优化问题1。
A. Garg, L. Gurvits, R. Oliveira, A. Wigderson, "A Deterministic Polynomial Time Algorithm for Non-commutative Rational Identity Testing," DOI: 10.1109/FOCS.2016.95, Pub Date: 2015-11-11.
Citations: 110
Local Conflict Coloring
Pub Date : 2015-11-04 DOI: 10.1109/FOCS.2016.73
P. Fraigniaud, Marc Heinrich, A. Kosowski
Locally finding a solution to symmetry-breaking tasks such as vertex-coloring, edge-coloring, maximal matching, maximal independent set, etc., is a long-standing challenge in distributed network computing. More recently, it has also become a challenge in the framework of centralized local computation. We introduce conflict coloring as a general symmetry-breaking task that includes all the aforementioned tasks as specific instantiations - conflict coloring includes all locally checkable labeling tasks from [Naor & Stockmeyer, STOC 1993]. Conflict coloring is characterized by two parameters l and d, where the former measures the amount of freedom given to the nodes for selecting their colors, and the latter measures the number of constraints which colors of adjacent nodes are subject to. We show that, in the standard LOCAL model for distributed network computing, if l/d > Δ, then conflict coloring can be solved in Õ(√Δ)+log*n rounds in n-node graphs with maximum degree Δ, where Õ ignores the polylog factors in Δ. The dependency in n is optimal, as a consequence of the Ω(log*n) lower bound by [Linial, SIAM J. Comp. 1992] for (Δ + 1)-coloring. An important special case of our result is a significant improvement over the best known algorithm for distributed (Δ + 1)-coloring due to [Barenboim, PODC 2015], which required Õ(Δ3/4) + log*n rounds. Improvements for other variants of coloring, including (Δ + 1)-list-coloring, (2Δ-1)-edge-coloring, coloring with forbidden color distances, etc., also follow from our general result on conflict coloring. Likewise, in the framework of centralized local computation algorithms (LCAs), our general result yields an LCA which requires a smaller number of probes than the previously best known algorithm for vertex-coloring, and works for a wide range of coloring problems.
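To make the (l, d) parameterization concrete, here is a minimal sequential (not distributed) sketch: each node has a list of l candidate colors, and across an edge at most d of a node's candidates conflict with any fixed neighbor color; whenever l > d·Δ a greedy pass always finds a non-conflicting candidate, the sequential analogue of the abstract's l/d > Δ threshold. All names below are made up for this sketch.

```python
def greedy_conflict_coloring(adj, lists, conflict):
    """Sequential greedy for a conflict-coloring instance.

    adj[v]   : neighbours of v
    lists[v] : the l candidate colors of v
    conflict : conflict(cu, cv) -> True if the pair is forbidden across
               an edge; at most d of a node's candidates conflict with
               any fixed neighbour color.

    If l > d * (max degree), each node always has a free candidate,
    so this never raises.
    """
    color = {}
    for v in adj:
        for c in lists[v]:
            if all(not conflict(c, color[u]) for u in adj[v] if u in color):
                color[v] = c
                break
        else:
            raise ValueError("no candidate left; needs l > d * degree")
    return color

# Proper (Δ+1)-coloring of the 4-cycle is the special case d = 1,
# where two colors conflict iff they are equal.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
lists = {v: [0, 1, 2] for v in adj}      # l = 3 > d * Δ = 1 * 2
coloring = greedy_conflict_coloring(adj, lists, lambda a, b: a == b)
```

The paper's actual contribution is doing this locally in Õ(√Δ) + log*n rounds of the LOCAL model, which the sequential sketch does not capture.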
Citations: 123
Learning in Auctions: Regret is Hard, Envy is Easy
Pub Date : 2015-11-04 DOI: 10.1109/FOCS.2016.31
C. Daskalakis, Vasilis Syrgkanis
An extensive body of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [1], [2], [3]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to the players becomes exponential. We show that this obstacle is inevitable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our positive and negative results extend to several auction formats that have been studied in the literature via the smoothness paradigm. Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. 
We show that this algorithm has applications outside of auction settings, establishing significant gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
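The paper's positive result relies on a Follow-The-Perturbed-Leader variant for infinitely many experts and non-linear payoffs; as a point of reference only, here is the classical finite-action FTPL of Kalai–Vempala, which the paper generalizes. Names and the test scenario are made up for this sketch.

```python
import numpy as np

def ftpl(payoffs, eta=0.1, seed=0):
    """Follow-The-Perturbed-Leader over finitely many actions.

    payoffs : (T, K) array; payoffs[t, k] is revealed only after the
              action for round t has been chosen.
    Each round plays the argmax of (cumulative past payoff + a fresh
    exponential perturbation of scale 1/eta).  Returns the total payoff
    collected and the regret against the best fixed action in hindsight.
    """
    rng = np.random.default_rng(seed)
    T, K = payoffs.shape
    cum = np.zeros(K)
    total = 0.0
    for t in range(T):
        a = int(np.argmax(cum + rng.exponential(1.0 / eta, size=K)))
        total += payoffs[t, a]
        cum += payoffs[t]
    regret = payoffs.sum(axis=0).max() - total
    return total, regret

# One clearly dominant action: FTPL locks onto it quickly, so regret
# stays far below the 0.8 * T cost of always playing the worst action.
P = np.tile([0.9, 0.5, 0.1], (1000, 1))
total, regret = ftpl(P)
```

The hardness result above says that for SiSPAs no analogue of this procedure can run in polynomial time per round (unless RP ⊇ NP), which is what motivates relaxing no-regret to no-envy.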
Citations: 58
The Constant Inapproximability of the Parameterized Dominating Set Problem
Pub Date : 2015-10-31 DOI: 10.1109/FOCS.2016.61
Yijia Chen, Bingkai Lin
We prove that there is no fpt-algorithm that can approximate the dominating set problem with any constant ratio, unless FPT = W[1]. Our hardness reduction is built on the second author's recent W[1]-hardness proof of the biclique problem [25]. This yields, among other things, a proof without the PCP machinery that the classical dominating set problem has no polynomial time constant approximation under the exponential time hypothesis.
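For contrast with the lower bound, the classical greedy algorithm does achieve a (ln n + 1)-approximation for dominating set in polynomial time; the result above shows this logarithmic factor cannot be improved to any constant by an fpt-algorithm unless FPT = W[1]. A minimal sketch (function name and adjacency-dict input are choices made for this illustration):

```python
def greedy_dominating_set(adj):
    """Classical greedy (ln n + 1)-approximation for dominating set.

    adj[v] : neighbours of v.  Repeatedly add the vertex that dominates
    the largest number of not-yet-dominated vertices (itself plus its
    neighbours), until every vertex is dominated.
    """
    undominated = set(adj)
    D = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | set(adj[u])) & undominated))
        D.add(v)
        undominated -= {v} | set(adj[v])
    return D

# A star K_{1,4}: the center alone dominates everything.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
D = greedy_dominating_set(star)
```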
Citations: 47
Journal
2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)