We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of non-rigid matrices: that is, we show that for every large enough field 𝔽, there is a non-zero n^2-variate polynomial P ∈ 𝔽[x_{1,1}, …, x_{n,n}] of degree at most poly(n) such that every matrix M that can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n^2/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer, and Landsberg [9] and improves the best upper bound known for this problem from exp(n^2) [9, 12] down to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n^2/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [21] to construct low-degree “universal” maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension-counting argument, which shows that any such polynomial map has a low-degree annihilating polynomial, completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [11].
{"title":"A Polynomial Degree Bound on Equations for Non-rigid Matrices and Small Linear Circuits","authors":"Ben lee Volk, Mrinal Kumar","doi":"10.1145/3543685","DOIUrl":"https://doi.org/10.1145/3543685","url":null,"abstract":"We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of the non-rigid matrices: That is, we show that for every large enough field 𝔽, there is a non-zero n2-variate polynomial P ε 𝔽[x1, 1, ..., xn, n] of degree at most poly(n) such that every matrix M that can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n2/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer, and Landsberg [9] and improves the best upper bound known for this problem down from exp (n2) [9, 12] to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n2/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [21] to construct low-degree “universal” maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension counting argument to show that any such polynomial map has a low-degree annihilating polynomial completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [11].","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123385989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we study the problem of testing subsequence-freeness. For a given subsequence (word) w = w_1 … w_k, a sequence (text) T = t_1 … t_n is said to contain w if there exist indices 1 ≤ i_1 < … < i_k ≤ n such that t_{i_j} = w_j for every 1 ≤ j ≤ k. Otherwise, T is w-free. While the large majority of research in property testing deals with algorithms that perform queries, here we consider sample-based testing (with one-sided error). In the “standard” sample-based model (i.e., under the uniform distribution), the algorithm is given samples (i, t_i) where i is distributed uniformly and independently at random. The algorithm should distinguish between the case that T is w-free and the case that T is ε-far from being w-free (i.e., more than an ε-fraction of its symbols must be modified to make it w-free). Freitag, Price, and Swartworth (Proceedings of RANDOM, 2017) showed that O((k^2 log k)/ε) samples suffice for this testing task. We obtain the following results. – The number of samples sufficient for one-sided error sample-based testing (under the uniform distribution) is O(k/ε). This upper bound builds on a characterization that we present for the distance of a text T from w-freeness in terms of the maximum number of copies of w in T, where these copies must obey certain restrictions. – We prove a matching lower bound, which holds for every word w. This implies that the above upper bound is tight. – The same upper bound holds in the more general distribution-free sample-based model. In this model, the algorithm receives samples (i, t_i) where i is distributed according to an arbitrary distribution p (and the distance from w-freeness is measured with respect to p). We highlight the fact that while we require the testing algorithm to work for every distribution and when provided only with samples, the complexity we obtain matches a known lower bound for a special case of the seemingly easier problem of testing subsequence-freeness with one-sided error under the uniform distribution and with queries (Canonne et al., Theory of Computing, 2019).
{"title":"Optimal Distribution-Free Sample-Based Testing of Subsequence-Freeness with One-Sided Error","authors":"D. Ron, Asaf Rosin","doi":"10.1145/3512750","DOIUrl":"https://doi.org/10.1145/3512750","url":null,"abstract":"In this work, we study the problem of testing subsequence-freeness. For a given subsequence (word) w = w1 … wk, a sequence (text) T = t1 … tn is said to contain w if there exist indices 1 ≤ i1 < … < ik ≤ n such that tij = wj for every 1 ≤ j ≤ k. Otherwise, T is w-free. While a large majority of the research in property testing deals with algorithms that perform queries, here we consider sample-based testing (with one-sided error). In the “standard” sample-based model (i.e., under the uniform distribution), the algorithm is given samples (i, ti) where i is distributed uniformly independently at random. The algorithm should distinguish between the case that T is w-free, and the case that T is ε-far from being w-free (i.e., more than an ε-fraction of its symbols should be modified so as to make it w-free). Freitag, Price, and Swartworth (Proceedings of RANDOM, 2017) showed that O((k2 log k)ε) samples suffice for this testing task. We obtain the following results. – The number of samples sufficient for one-sided error sample-based testing (under the uniform distribution) is O(kε). This upper bound builds on a characterization that we present for the distance of a text T from w-freeness in terms of the maximum number of copies of w in T, where these copies should obey certain restrictions. – We prove a matching lower bound, which holds for every word w. This implies that the above upper bound is tight. – The same upper bound holds in the more general distribution-free sample-based model. In this model, the algorithm receives samples (i, ti) where i is distributed according to an arbitrary distribution p (and the distance from w-freeness is measured with respect to p). We highlight the fact that while we require that the testing algorithm work for every distribution and when only provided with samples, the complexity we get matches a known lower bound for a special case of the seemingly easier problem of testing subsequence-freeness with one-sided error under the uniform distribution and with queries (Canonne et al., Theory of Computing, 2019).","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116274673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove that the OR function on {-1,1}^n can be pointwise approximated with error ε by a polynomial of degree O(k) and weight 2^{O(n log(1/ε)/k)}, for any k ≥ √(n log(1/ε)). This result is tight for any k ≤ (1-Ω(1))n. Previous results were either not tight or had ε = Ω(1). In general, we obtain a tight approximate degree-weight result for any symmetric function. Building on this, we also obtain an approximate degree-weight result for bounded-width CNF. For these two classes no such result was known. One motivation for such results comes from the study of indistinguishability. Two distributions P, Q over n-bit strings are (k,δ)-indistinguishable if their projections on any k bits have statistical distance at most δ. The above approximations give values of (k,δ) that suffice to fool OR, symmetric functions, and bounded-width CNF, and the first result is tight for all k while the second result is tight for k ≤ (1-Ω(1))n. We also show that any two (k,δ)-indistinguishable distributions are O(n^{k/2} δ)-close to two distributions that are (k,0)-indistinguishable, improving the previous bound of O(n)^k δ. Finally, we present proofs of some known approximate degree lower bounds in the language of indistinguishability, which we find more intuitive.
{"title":"Approximate Degree, Weight, and Indistinguishability","authors":"Xuangui Huang, Emanuele Viola","doi":"10.1145/3492338","DOIUrl":"https://doi.org/10.1145/3492338","url":null,"abstract":"We prove that the OR function on {-1,1}n can be pointwise approximated with error ε by a polynomial of degree O(k) and weight 2O(n log (1/ε)/k), for any k ≥ √n log (1/ε). This result is tight for any k ≤ (1-Ω (1))n. Previous results were either not tight or had ε = Ω (1). In general, we obtain a tight approximate degree-weight result for any symmetric function. Building on this, we also obtain an approximate degree-weight result for bounded-width CNF. For these two classes no such result was known. We prove that the ( mathsf {OR} ) function on ( lbrace -1,1rbrace ^n ) can be pointwise approximated with error ( epsilon ) by a polynomial of degree ( O(k) ) and weight ( 2^{O(n log (1/epsilon) /k)} ) , for any ( k ge sqrt {n log (1/epsilon)} ) . This result is tight for any ( k le (1-Omega (1))n ) . Previous results were either not tight or had ( epsilon = Omega (1) ) . In general, we obtain a tight approximate degree-weight result for any symmetric function. Building on this, we also obtain an approximate degree-weight result for bounded-width ( mathsf {CNF} ) . For these two classes no such result was known. One motivation for such results comes from the study of indistinguishability. Two distributions ( P ) , ( Q ) over ( n ) -bit strings are ( (k,delta) ) -indistinguishable if their projections on any ( k ) bits have statistical distance at most ( delta ) . The above approximations give values of ( (k,delta) ) that suffice to fool ( mathsf {OR} ) , symmetric functions, and bounded-width ( mathsf {CNF} ) , and the first result is tight for all ( k ) while the second result is tight for ( k le (1-Omega (1))n ) . We also show that any two ( (k, delta) ) -indistinguishable distributions are ( O(n^{k/2}delta) ) -close to two distributions that are ( (k,0) ) -indistinguishable, improving the previous bound of ( O(n)^k delta ) . Finally, we present proofs of some known approximate degree lower bounds in the language of indistinguishability, which we find more intuitive.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134239889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the fine-grained complexity of NP-complete satisfiability (SAT) problems and constraint satisfaction problems (CSPs) in the context of the strong exponential-time hypothesis (SETH), showing non-trivial lower and upper bounds on the running time. Here, by a non-trivial lower bound for a problem SAT(Γ) (respectively CSP(Γ)) with constraint language Γ, we mean a value c_0 > 1 such that the problem cannot be solved in time O(c^n) for any c < c_0 unless SETH is false, while a non-trivial upper bound is simply an algorithm for the problem running in time O(c^n) for some c < 2. Such lower bounds have proven extremely elusive, and except for cases where c_0 = 2 effectively no such previous bound was known. We achieve this by employing an algebraic framework, studying constraint languages Γ in terms of their algebraic properties. We uncover a powerful algebraic framework where a mild restriction on the allowed constraints offers a concise algebraic characterization. On the relational side we restrict ourselves to Boolean languages closed under variable negation and partial assignment, called sign-symmetric languages. On the algebraic side this results in a description via partial operations arising from systems of identities, with a close connection to operations resulting in tractable CSPs, such as near-unanimity operations and edge operations. Using this connection we construct improved algorithms for several interesting classes of sign-symmetric languages, and prove explicit lower bounds under SETH. Thus, we find the first example of an NP-complete SAT problem with a non-trivial algorithm which also admits a non-trivial lower bound under SETH. This suggests a dichotomy conjecture with a close connection to the CSP dichotomy theorem: an NP-complete SAT problem admits an improved algorithm if and only if it admits a non-trivial partial invariant of the above form.
{"title":"The (Coarse) Fine-Grained Structure of NP-Hard SAT and CSP Problems","authors":"Victor Lagerkvist, Magnus Wahlström","doi":"10.1145/3492336","DOIUrl":"https://doi.org/10.1145/3492336","url":null,"abstract":"We study the fine-grained complexity of NP-complete satisfiability (SAT) problems and constraint satisfaction problems (CSPs) in the context of the strong exponential-time hypothesis(SETH), showing non-trivial lower and upper bounds on the running time. Here, by a non-trivial lower bound for a problem SAT (Γ) (respectively CSP (Γ)) with constraint language Γ, we mean a value c0 > 1 such that the problem cannot be solved in time O(cn) for any c <c0 unless SETH is false, while a non-trivial upper bound is simply an algorithm for the problem running in time O(cn) for some c< 2. Such lower bounds have proven extremely elusive, and except for cases where c0=2 effectively no such previous bound was known. We achieve this by employing an algebraic framework, studying constraint languages Γ in terms of their algebraic properties. We uncover a powerful algebraic framework where a mild restriction on the allowed constraints offers a concise algebraic characterization. On the relational side we restrict ourselves to Boolean languages closed under variable negation and partial assignment, called sign-symmetric languages. On the algebraic side this results in a description via partial operations arising from system of identities, with a close connection to operations resulting in tractable CSPs, such as near unanimity operations and edge operations. Using this connection we construct improved algorithms for several interesting classes of sign-symmetric languages, and prove explicit lower bounds under SETH. Thus, we find the first example of an NP-complete SAT problem with a non-trivial algorithm which also admits a non-trivial lower bound under SETH. This suggests a dichotomy conjecture with a close connection to the CSP dichotomy theorem: an NP-complete SAT problem admits an improved algorithm if and only if it admits a non-trivial partial invariant of the above form.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"18 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124605620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
F. Fomin, P. Golovach, D. Lokshtanov, Fahad Panolan, Saket Saurabh, M. Zehavi
Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I,k) of some (parameterized) problem π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k + g(I). Here, g(I) is usually a lower bound on the minimum size of a solution. Since its introduction in 1999 for MAX SAT and MAX CUT (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above (or, rather, times) a guarantee: Given an instance (I,k) of some (parameterized) problem π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k · g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, which is the most natural guarantee for this problem, and provide a fixed-parameter algorithm. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε > 0, multiplicative parameterization above g(I)^{1+ε} of Long Cycle yields para-NP-hardness, thus our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of fixed-parameter algorithms as well as kernelization algorithms for additional problems parameterized multiplicatively above girth.
{"title":"Multiplicative Parameterization Above a Guarantee","authors":"F. Fomin, P. Golovach, D. Lokshtanov, Fahad Panolan, Saket Saurabh, M. Zehavi","doi":"10.1145/3460956","DOIUrl":"https://doi.org/10.1145/3460956","url":null,"abstract":"Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I,k) of some (parameterized) problem π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k + g(I). Here, g(I) is usually a lower bound on the minimum size of a solution. Since its introduction in 1999 for MAX SAT and MAX CUT (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above (or, rather, times) a guarantee: Given an instance (I,k) of some (parameterized) problem π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k · g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, which is the most natural guarantee for this problem, and provide a fixed-parameter algorithm. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε > 0, multiplicative parameterization above g(I)1+ε of Long Cycle yields para-NP-hardness, thus our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of fixed-parameter algorithms as well as kernelization algorithms for additional problems parameterized multiplicatively above girth.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129221681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Valentine Kabanets, Sajin Koroth, Zhenjian Lu, Dimitrios Myrisiotis, I. Oliveira
The class FORMULA[s]∘G consists of Boolean functions computable by size-s De Morgan formulas whose leaves are any Boolean functions from a class G. We give lower bounds and (SAT, Learning, and pseudorandom generator (PRG)) algorithms for FORMULA[n^{1.99}]∘G, for classes G of functions with low communication complexity. Let R^{(k)}(G) be the maximum k-party number-on-forehead randomized communication complexity of a function in G. Among other results, we show the following: • The Generalized Inner Product function GIP^k_n cannot be computed in FORMULA[s]∘G on more than a 1/2 + ε fraction of inputs for s = o(n^2 / (k · 4^k · R^{(k)}(G) · log(n/ε) · log(1/ε))^2). This significantly extends the lower bounds against bipartite formulas obtained by [62]. As a corollary, we get an average-case lower bound for GIP^k_n against FORMULA[n^{1.99}]∘PTF_{k−1}, i.e., sub-quadratic-size De Morgan formulas with degree-(k−1) PTF (polynomial threshold function) gates at the bottom. Previously, it was open whether a super-linear lower bound holds for AND of PTFs. • There is a PRG of seed length n/2 + O(s · R^{(2)}(G) · log(s/ε) · log(1/ε)) that ε-fools FORMULA[s]∘G. For the special case of FORMULA[s]∘LTF, i.e., size-s formulas with LTF (linear threshold function) gates at the bottom, we get the better seed length O(n^{1/2} · s^{1/4} · log(n) · log(n/ε)). In particular, this provides the first non-trivial PRG (with seed length o(n)) for intersections of n halfspaces in the regime where ε ≤ 1/n, complementing a recent result of [45]. • There exists a randomized 2^{n−t} #SAT algorithm for FORMULA[s]∘G, where t = Ω(n / (√s · log^2(s) · R^{(2)}(G)))^{1/2}. In particular, this implies a nontrivial #SAT algorithm for FORMULA[n^{1.99}]∘LTF. • The Minimum Circuit Size Problem is not in FORMULA[n^{1.99}]∘XOR; thereby making progress on hardness magnification, in connection with results from [14, 46]. On the algorithmic side, we show that the concept class FORMULA[n^{1.99}]∘XOR can be PAC-learned in time 2^{O(n/log n)}.
{"title":"Algorithms and Lower Bounds for De Morgan Formulas of Low-Communication Leaf Gates","authors":"Valentine Kabanets, Sajin Koroth, Zhenjian Lu, Dimitrios Myrisiotis, I. Oliveira","doi":"10.1145/3470861","DOIUrl":"https://doi.org/10.1145/3470861","url":null,"abstract":"The class FORMULA[s]∘G consists of Boolean functions computable by size-s De Morgan formulas whose leaves are any Boolean functions from a class G. We give lower bounds and (SAT, Learning, and pseudorandom generators (PRGs)) algorithms for FORMULA[n1.99]∘G, for classes G of functions with low communication complexity. Let R(k)G be the maximum k-party number-on-forehead randomized communication complexity of a function in G. Among other results, we show the following: • The Generalized Inner Product function GIPkn cannot be computed in FORMULA[s]° G on more than 1/2+ε fraction of inputs for s=o(n2/k⋅4k⋅R(k)(G)⋅log(n/ε)⋅log(1/ε))2). This significantly extends the lower bounds against bipartite formulas obtained by [62]. As a corollary, we get an average-case lower bound for GIPkn against FORMULA[n1.99]∘PTFk−1, i.e., sub-quadratic-size De Morgan formulas with degree-k-1) PTF (polynomial threshold function) gates at the bottom. Previously, it was open whether a super-linear lower bound holds for AND of PTFs.• There is a PRG of seed length n/2+O(s⋅R(2)(G)⋅log(s/ε)⋅log(1/ε)) that ε-fools FORMULA[s]∘G. For the special case of FORMULA[s]∘LTF, i.e., size-s formulas with LTF (linear threshold function) gates at the bottom, we get the better seed length O(n1/2⋅s1/4⋅log(n)⋅log(n/ε)). In particular, this provides the first non-trivial PRG (with seed length o(n)) for intersections of n halfspaces in the regime where ε≤1/n, complementing a recent result of [45].• There exists a randomized 2n-t #SAT algorithm for FORMULA[s]∘G, where t=Ω(n√s⋅log2(s)⋅R(2)(G))/1/2. In particular, this implies a nontrivial #SAT algorithm for FORMULA[n1.99]∘LTF.• The Minimum Circuit Size Problem is not in FORMULA[n1.99]∘XOR; thereby making progress on hardness magnification, in connection with results from [14, 46]. On the algorithmic side, we show that the concept class FORMULA[n1.99]∘XOR can be PAC-learned in time 2O(n/log n).","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125858582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph homomorphism has been an important research topic since its introduction [20]. Stated in the language of binary relational structures in that paper [20], Lovász proved a fundamental theorem that, for a graph H given by its 0-1 valued adjacency matrix, the graph homomorphism function G ↦ hom(G, H) determines the isomorphism type of H. In the past 50 years, various extensions have been proved by many researchers [1, 15, 21, 24, 26]. These extend the basic 0-1 case to admit vertex and edge weights; but these extensions all have some restrictions such as all vertex weights must be positive. In this article, we prove a general form of this theorem where H can have arbitrary vertex and edge weights. A noteworthy aspect is that we prove this by a surprisingly simple and unified argument. This bypasses various technical obstacles and unifies and extends all previous known versions of this theorem on graphs. The constructive proof of our theorem can be used to make various complexity dichotomy theorems for graph homomorphism effective in the following sense: it provides an algorithm that for any H either outputs a P-time algorithm solving hom(·, H) or a P-time reduction from a canonical #P-hard problem to hom(·, H).
{"title":"On a Theorem of Lovász that (&sdot, H) Determines the Isomorphism Type of H","authors":"Jin-Yi Cai, A. Govorov","doi":"10.1145/3448641","DOIUrl":"https://doi.org/10.1145/3448641","url":null,"abstract":"Graph homomorphism has been an important research topic since its introduction [20]. Stated in the language of binary relational structures in that paper [20], Lovász proved a fundamental theorem that, for a graph H given by its 0-1 valued adjacency matrix, the graph homomorphism function G ↦ hom(G, H) determines the isomorphism type of H. In the past 50 years, various extensions have been proved by many researchers [1, 15, 21, 24, 26]. These extend the basic 0-1 case to admit vertex and edge weights; but these extensions all have some restrictions such as all vertex weights must be positive. In this article, we prove a general form of this theorem where H can have arbitrary vertex and edge weights. A noteworthy aspect is that we prove this by a surprisingly simple and unified argument. This bypasses various technical obstacles and unifies and extends all previous known versions of this theorem on graphs. The constructive proof of our theorem can be used to make various complexity dichotomy theorems for graph homomorphism effective in the following sense: it provides an algorithm that for any H either outputs a P-time algorithm solving hom(&sdot, H) or a P-time reduction from a canonical #P-hard problem to hom(&sdot, H).","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"21 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121030075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove that for every distribution D on n bits with Shannon entropy ≥ n − a, at most O(2^d a log^{d+1} g)/γ^5 of the bits D_i can be predicted with advantage γ by an AC^0 circuit of size g and depth d that is a function of all of the bits of D except D_i. This answers a question by Meir and Wigderson, who proved a corresponding result for decision trees. We also show that there are distributions D with entropy ≥ n − O(1) such that any subset of O(n/log n) bits of D can be distinguished from uniform by a circuit of depth 2 and size poly(n). This separates the notions of predictability and distinguishability in this context.
{"title":"AC0 Unpredictability","authors":"Emanuele Viola","doi":"10.1145/3442362","DOIUrl":"https://doi.org/10.1145/3442362","url":null,"abstract":"We prove that for every distribution D on n bits with Shannon entropy ≥ n − a, at most O(2da logd+1g)/γ5 of the bits Di can be predicted with advantage γ by an AC0 circuit of size g and depth D that is a function of all of the bits of D except Di. This answers a question by Meir and Wigderson, who proved a corresponding result for decision trees. We also show that there are distributions D with entropy ≥ n − O(1) such that any subset of O(n/ log n) bits of D on can be distinguished from uniform by a circuit of depth 2 and size poly(n). This separates the notions of predictability and distinguishability in this context.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124890099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus, we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead and can therefore be applied to subpolynomial improvements such as the n^3/exp(Θ(√log n))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)^d for constant m can be solved in time n · poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1 < c < 2 and all k there is an O(c^n)-time algorithm for k-SAT. Then we prove that for all k, there is an O((c + o(1))^n)-time algorithm for approximate #k-SAT. In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ε in time 2^{o(n)}/ε^2 (taking ε > 0 as part of the input).
{"title":"Fine-Grained Reductions from Approximate Counting to Decision","authors":"Holger Dell, John Lapinskas","doi":"10.1145/3442352","DOIUrl":"https://doi.org/10.1145/3442352","url":null,"abstract":"In this article, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus, we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead and can therefore be applied to subpolynomial improvements such as the n3/ exp(Θ (√ log n))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)d for constant m can be solved in time n · poly (d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1 < c < 2 and all k there is an O(cn)-time algorithm for k-SAT. Then we prove that for all k, there is an O(( c + o(1))n)-time algorithm for approximate #k-SAT. In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ɛ in time 2o(n)/ ɛ2 (taking ɛ > 0 as part of the input).","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121814142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Florent Becker, T. Besson, J. Durand-Lose, Aurélien Emmanuel, Mohammad-Hadi Foroughmand-Araabi, S. Goliaei, Shahrzad Heydarshahi
Signal machines form an abstract and idealized model of collision computing. Based on dimensionless signals moving on the real line, they model particle/signal dynamics in Cellular Automata. Each particle, or signal, moves at constant speed in continuous time and space. When signals meet, they get replaced by other signals. A signal machine defines the types of available signals, their speeds, and the rules for replacement in collision. A signal machine A simulates another one B if all the space-time diagrams of B can be generated from space-time diagrams of A by removing some signals and renaming other signals according to local information. Given any finite set of speeds S we construct a signal machine that is able to simulate any signal machine whose speeds belong to S. Each signal is simulated by a macro-signal, a ray of parallel signals. Each macro-signal has a main signal located exactly where the simulated signal would be, as well as auxiliary signals that encode its id and the collision rules of the simulated machine. The simulation of a collision, a macro-collision, consists of two phases. In the first phase, macro-signals are shrunk, and then the macro-signals involved in the collision are identified and it is ensured that no other macro-signal comes too close. If some do, the process is aborted and the macro-signals are shrunk, so that the correct macro-collision will eventually be restarted and successfully initiated. Otherwise, the second phase starts: the appropriate collision rule is found and new macro-signals are generated accordingly. Considering all finite sets of speeds S and their corresponding simulators provides an intrinsically universal family of signal machines.
{"title":"Abstract Geometrical Computation 10","authors":"Florent Becker, T. Besson, J. Durand-Lose, Aurélien Emmanuel, Mohammad-Hadi Foroughmand-Araabi, S. Goliaei, Shahrzad Heydarshahi","doi":"10.1145/3442359","DOIUrl":"https://doi.org/10.1145/3442359","url":null,"abstract":"Signal machines form an abstract and idealized model of collision computing. Based on dimensionless signals moving on the real line, they model particle/signal dynamics in Cellular Automata. Each particle, or signal, moves at constant speed in continuous time and space. When signals meet, they get replaced by other signals. A signal machine defines the types of available signals, their speeds, and the rules for replacement in collision. A signal machine A simulates another one B if all the space-time diagrams of B can be generated from space-time diagrams of A by removing some signals and renaming other signals according to local information. Given any finite set of speeds S we construct a signal machine that is able to simulate any signal machine whose speeds belong to S. Each signal is simulated by a macro-signal, a ray of parallel signals. Each macro-signal has a main signal located exactly where the simulated signal would be, as well as auxiliary signals that encode its id and the collision rules of the simulated machine. The simulation of a collision, a macro-collision, consists of two phases. In the first phase, macro-signals are shrunk, and then the macro-signals involved in the collision are identified and it is ensured that no other macro-signal comes too close. If some do, the process is aborted and the macro-signals are shrunk, so that the correct macro-collision will eventually be restarted and successfully initiated. Otherwise, the second phase starts: the appropriate collision rule is found and new macro-signals are generated accordingly. Considering all finite sets of speeds S and their corresponding simulators provides an intrinsically universal family of signal machines.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114274775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}