We consider the standard ILP Feasibility problem: given an integer linear program of the form {Ax = b, x ⩾ 0}, where A is an integer matrix with k rows and ℓ columns, x is a vector of ℓ variables, and b is a vector of k integers, we ask whether there exists x ∈ N^ℓ that satisfies Ax = b. Each row of A specifies one linear constraint on x; our goal is to study the complexity of ILP Feasibility when both k, the number of constraints, and ‖A‖∞, the largest absolute value of an entry in A, are small. Papadimitriou was the first to give a fixed-parameter algorithm for ILP Feasibility under parameterization by the number of constraints, running in time ((‖A‖∞ + ‖b‖∞) ⋅ k)^{O(k^2)}. This was very recently improved by Eisenbrand and Weismantel, who used the Steinitz lemma to design an algorithm with running time (k‖A‖∞)^{O(k)} ⋅ log ‖b‖∞, which was subsequently refined by Jansen and Rohwedder to O(√k ⋅ ‖A‖∞)^k ⋅ log(‖A‖∞ + ‖b‖∞) ⋅ log ‖A‖∞. We prove that for {0, 1}-matrices A, the running time of the algorithm of Eisenbrand and Weismantel is probably optimal: an algorithm with running time 2^{o(k log k)} ⋅ (ℓ + ‖b‖∞)^{o(k)} would contradict the exponential time hypothesis. This improves previous non-tight lower bounds of Fomin et al. We then consider integer linear programs that may have many constraints, provided that these are structured in a “shallow” way. Precisely, we consider the parameter dual treedepth of the matrix A, denoted td_D(A), which is the treedepth of the graph over the rows of A, where two rows are adjacent if in some column they simultaneously contain a non-zero entry. It was recently shown by Koutecký et al. that ILP Feasibility can be solved in time ‖A‖∞^{2^{O(td_D(A))}} ⋅ (k + ℓ + log ‖b‖∞)^{O(1)}. We present a streamlined proof of this fact and prove that, again, this running time is probably optimal: even assuming that all entries of A and b are in {−1, 0, 1}, the existence of an algorithm with running time 2^{2^{o(td_D(A))}} ⋅ (k + ℓ)^{O(1)} would contradict the exponential time hypothesis.
{"title":"Tight Complexity Lower Bounds for Integer Linear Programming with Few Constraints","authors":"D. Knop, Michal Pilipczuk, Marcin Wrochna","doi":"10.1145/3397484","DOIUrl":"https://doi.org/10.1145/3397484","url":null,"abstract":"We consider the standard ILP Feasibility problem: given an integer linear program of the form {Ax = b, x ⩾ 0}, where A is an integer matrix with k rows and ℓ columns, x is a vector of ℓ variables, and b is a vector of k integers, we ask whether there exists x ∈ N ℓ that satisfies Ax = b. Each row of A specifies one linear constraint on x; our goal is to study the complexity of ILP Feasibility when both k, the number of constraints, and ‖A‖∞, the largest absolute value of an entry in A, are small. Papadimitriou was the first to give a fixed-parameter algorithm for ILP Feasibility under parameterization by the number of constraints that runs in time ((‖A‖∞ + ‖b‖∞) ⋅ k)O(k2). This was very recently improved by Eisenbrand and Weismantel, who used the Steinitz lemma to design an algorithm with running time (k‖A‖∞)O(k) ⋅ log ‖b‖∞, which was subsequently refined by Jansen and Rohwedder to O(√ k‖A‖∞)k ⋅ log (‖ A‖∞ + ‖b‖∞) ⋅ log ‖A‖∞. We prove that for {0, 1}-matrices A, the running time of the algorithm of Eisenbrand and Weismantel is probably optimal: an algorithm with running time 2o(k log k) ⋅ (ℓ + ‖b‖∞)o(k) would contradict the exponential time hypothesis. This improves previous non-tight lower bounds of Fomin et al. We then consider integer linear programs that may have many constraints, but they need to be structured in a “shallow” way. Precisely, we consider the parameter dual treedepth of the matrix A, denoted tdD(A), which is the treedepth of the graph over the rows of A, where two rows are adjacent if in some column they simultaneously contain a non-zero entry. It was recently shown by Koutecký et al. that ILP Feasibility can be solved in time ‖A‖∞2O(tdD(A)) ⋅ (k + ℓ + log ‖b‖∞)O(1). We present a streamlined proof of this fact and prove that, again, this running time is probably optimal: even assuming that all entries of A and b are in {−1, 0, 1}, the existence of an algorithm with running time 22o(tdD(A)) ⋅ (k + ℓ)O(1) would contradict the exponential time hypothesis.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115192929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the Ising model, we consider the problem of estimating the covariance of the spins at two specified vertices. In the ferromagnetic case, it is easy to obtain an additive approximation to this covariance by repeatedly sampling from the relevant Gibbs distribution. However, we desire a multiplicative approximation, and it is not clear how to achieve this by sampling, given that the covariance can be exponentially small. Our main contribution is a fully polynomial time randomised approximation scheme (FPRAS) for the covariance in the ferromagnetic case. We also show that the restriction to the ferromagnetic case is essential—there is no FPRAS for multiplicatively estimating the covariance of an antiferromagnetic Ising model unless RP = #P. In fact, we show that even determining the sign of the covariance is #P-hard in the antiferromagnetic case.
{"title":"Approximating Pairwise Correlations in the Ising Model","authors":"L. A. Goldberg, M. Jerrum","doi":"10.1145/3337785","DOIUrl":"https://doi.org/10.1145/3337785","url":null,"abstract":"In the Ising model, we consider the problem of estimating the covariance of the spins at two specified vertices. In the ferromagnetic case, it is easy to obtain an additive approximation to this covariance by repeatedly sampling from the relevant Gibbs distribution. However, we desire a multiplicative approximation, and it is not clear how to achieve this by sampling, given that the covariance can be exponentially small. Our main contribution is a fully polynomial time randomised approximation scheme (FPRAS) for the covariance in the ferromagnetic case. We also show that the restriction to the ferromagnetic case is essential—there is no FPRAS for multiplicatively estimating the covariance of an antiferromagnetic Ising model unless RP = #P. In fact, we show that even determining the sign of the covariance is #P-hard in the antiferromagnetic case.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132162781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Srinivasan Arunachalam, Sourav Chakraborty, M. Koucký, Nitin Saurabh, R. D. Wolf
Given a Boolean function f : {−1,1}^n → {−1,1}, define the Fourier distribution to be the distribution on subsets of [n], where each S ⊆ [n] is sampled with probability fˆ(S)^2. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai [28] seeks to relate two fundamental measures associated with the Fourier distribution: does there exist a universal constant C > 0 such that H(fˆ^2) ≤ C ⋅ Inf(f), where H(fˆ^2) is the Shannon entropy of the Fourier distribution of f and Inf(f) is the total influence of f? In this article, we present three new contributions toward the FEI conjecture: (1) Our first contribution shows that H(fˆ^2) ≤ 2 ⋅ aUC⊕(f), where aUC⊕(f) is the average unambiguous parity-certificate complexity of f. This improves upon several bounds shown by Chakraborty et al. [20]. We further improve this bound for unambiguous DNFs. We also discuss how our work makes Mansour's conjecture for DNFs a natural next step toward resolution of the FEI conjecture. (2) We next consider the weaker Fourier Min-Entropy-Influence (FMEI) conjecture posed by O'Donnell and others [50, 53], which asks if H∞(fˆ^2) ≤ C ⋅ Inf(f), where H∞(fˆ^2) is the min-entropy of the Fourier distribution. We show H∞(fˆ^2) ≤ 2 ⋅ Cmin⊕(f), where Cmin⊕(f) is the minimum parity-certificate complexity of f. We also show that for all ε ≥ 0, we have H∞(fˆ^2) ≤ 2 log(∥fˆ∥_{1,ε}/(1−ε)), where ∥fˆ∥_{1,ε} is the approximate spectral norm of f. As a corollary, we verify the FMEI conjecture for the class of read-k DNFs (for constant k). (3) Our third contribution is to better understand implications of the FEI conjecture for the structure of polynomials that 1/3-approximate a Boolean function on the Boolean cube. We pose a conjecture: no flat polynomial (whose non-zero Fourier coefficients have the same magnitude) of degree d and sparsity 2^{ω(d)} can 1/3-approximate a Boolean function. This conjecture is known to be true assuming FEI, and we prove the conjecture unconditionally (i.e., without assuming the FEI conjecture) for a class of polynomials. We discuss an intriguing connection between our conjecture and the constant for the Bohnenblust-Hille inequality, which has been extensively studied in functional analysis.
{"title":"Improved Bounds on Fourier Entropy and Min-entropy","authors":"Srinivasan Arunachalam, Sourav Chakraborty, M. Koucký, Nitin Saurabh, R. D. Wolf","doi":"10.1145/3470860","DOIUrl":"https://doi.org/10.1145/3470860","url":null,"abstract":"Given a Boolean function f:{ -1,1} ^{n}→ { -1,1, define the Fourier distribution to be the distribution on subsets of [n], where each S ⊆ [n] is sampled with probability f ˆ (S)2. The Fourier Entropy-influence (FEI) conjecture of Friedgut and Kalai [28] seeks to relate two fundamental measures associated with the Fourier distribution: does there exist a universal constant C > 0 such that H(fˆ2) ≤ C ⋅ Inf (f), where H(fˆ2) is the Shannon entropy of the Fourier distribution of f and Inf(f) is the total influence of f In this article, we present three new contributions toward the FEI conjecture: (1) Our first contribution shows that H(fˆ2) ≤ 2 ⋅ aUC⊕(f), where aUC⊕(f) is the average unambiguous parity-certificate complexity of f. This improves upon several bounds shown by Chakraborty et al. [20]. We further improve this bound for unambiguous DNFs. We also discuss how our work makes Mansour's conjecture for DNFs a natural next step toward resolution of the FEI conjecture.(2) We next consider the weaker Fourier Min-entropy-influence (FMEI) conjecture posed by O'Donnell and others [50, 53], which asks if H ∞ fˆ2) ≤ C ⋅ Inf(f), where H ∞ fˆ2) is the min-entropy of the Fourier distribution. We show H∞(fˆ2) ≤ 2⋅Cmin⊕(f), where Cmin⊕(f) is the minimum parity-certificate complexity of f. We also show that for all ε≥0, we have H∞(fˆ2)≤2 log(∥fˆ∥1,ε/(1−ε)), where ∥fˆ∥1,ε is the approximate spectral norm of f. As a corollary, we verify the FMEI conjecture for the class of read-k DNFs (for constant k).(3) Our third contribution is to better understand implications of the FEI conjecture for the structure of polynomials that 1/3-approximate a Boolean function on the Boolean cube. We pose a conjecture: no flat polynomial(whose non-zero Fourier coefficients have the same magnitude) of degree d and sparsity 2ω(d) can 1/3-approximate a Boolean function. This conjecture is known to be true assuming FEI, and we prove the conjecture unconditionally (i.e., without assuming the FEI conjecture) for a class of polynomials. We discuss an intriguing connection between our conjecture and the constant for the Bohnenblust-Hille inequality, which has been extensively studied in functional analysis.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127100835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We characterize several complexity measures for the resolution of Tseitin formulas in terms of a two-person cop-robber game. Our game is a slight variation of the one Seymour and Thomas used in order to characterize the tree-width parameter. For any undirected graph, by counting the number of cops needed in our game in order to catch a robber in it, we are able to exactly characterize the width, variable space, and depth measures for the resolution of the Tseitin formula corresponding to that graph. We also give an exact game characterization of resolution variable space for any formula. We show that our game can be played in a monotone way. This implies that the associated resolution measures on Tseitin formulas correspond exactly to those under the restriction of Davis-Putnam resolution, implying that this kind of resolution is optimal on Tseitin formulas for all the considered measures. Using our characterizations, we improve the existing complexity bounds for Tseitin formulas, showing that resolution width, depth, and variable space coincide up to a logarithmic factor, and that variable space is bounded by the clause space times a logarithmic factor.
{"title":"Cops-Robber Games and the Resolution of Tseitin Formulas","authors":"Nicola Galesi, N. Talebanfard, J. Torán","doi":"10.1145/3378667","DOIUrl":"https://doi.org/10.1145/3378667","url":null,"abstract":"We characterize several complexity measures for the resolution of Tseitin formulas in terms of a two person cop-robber game. Our game is a slight variation of the one Seymour and Thomas used in order to characterize the tree-width parameter. For any undirected graph, by counting the number of cops needed in our game in order to catch a robber in it, we are able to exactly characterize the width, variable space, and depth measures for the resolution of the Tseitin formula corresponding to that graph. We also give an exact game characterization of resolution variable space for any formula. We show that our game can be played in a monotone way. This implies that the associated resolution measures on Tseitin formulas correspond exactly to those under the restriction of Davis-Putnam resolution, implying that this kind of resolution is optimal on Tseitin formulas for all the considered measures. Using our characterizations, we improve the existing complexity bounds for Tseitin formulas showing that resolution width, depth, and variable space coincide up to a logarithmic factor, and that variable space is bounded by the clause space times a logarithmic factor.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126109497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let G be a graph that contains an induced subgraph H. A retraction from G to H is a homomorphism from G to H that is the identity function on H. Retractions are very well studied: Given H, the complexity of deciding whether there is a retraction from an input graph G to H is completely classified, in the sense that it is known for which H this problem is tractable (assuming P ≠ NP). Similarly, the complexity of (exactly) counting retractions from G to H is classified (assuming FP ≠ #P). However, almost nothing is known about approximately counting retractions. Our first contribution is to give a complete trichotomy for approximately counting retractions to graphs without short cycles. The result is as follows: (1) Approximately counting retractions to a graph H of girth at least 5 is in FP if every connected component of H is a star, a single looped vertex, or an edge with two loops. (2) Otherwise, if every component is an irreflexive caterpillar or a partially bristled reflexive path, then approximately counting retractions to H is equivalent to approximately counting the independent sets of a bipartite graph—a problem that is complete in the approximate counting complexity class #RHΠ1. (3) Finally, if none of these hold, then approximately counting retractions to H is equivalent to approximately counting the satisfying assignments of a Boolean formula. Our second contribution is to locate the retraction counting problem for each H in the complexity landscape of related approximate counting problems. Interestingly, our results are in contrast to the situation in the exact counting context. We show that the problem of approximately counting retractions is separated both from the problem of approximately counting homomorphisms and from the problem of approximately counting list homomorphisms—whereas for exact counting all three of these problems are interreducible. We also show that the number of retractions is at least as hard to approximate as both the number of surjective homomorphisms and the number of compactions. In contrast, exactly counting compactions is the hardest of all of these exact counting problems.
{"title":"The Complexity of Approximately Counting Retractions","authors":"Jacob Focke, L. A. Goldberg, Stanislav Živný","doi":"10.1145/3397472","DOIUrl":"https://doi.org/10.1145/3397472","url":null,"abstract":"Let G be a graph that contains an induced subgraph H. A retraction from G to H is a homomorphism from G to H that is the identity function on H. Retractions are very well studied: Given H, the complexity of deciding whether there is a retraction from an input graph G to H is completely classified, in the sense that it is known for which H this problem is tractable (assuming P ≠ NP). Similarly, the complexity of (exactly) counting retractions from G to H is classified (assuming FP ≠ #P). However, almost nothing is known about approximately counting retractions. Our first contribution is to give a complete trichotomy for approximately counting retractions to graphs without short cycles. The result is as follows: (1) Approximately counting retractions to a graph H of girth at least 5 is in FP if every connected component of H is a star, a single looped vertex, or an edge with two loops. (2) Otherwise, if every component is an irreflexive caterpillar or a partially bristled reflexive path, then approximately counting retractions to H is equivalent to approximately counting the independent sets of a bipartite graph—a problem that is complete in the approximate counting complexity class RH Π 1. (3) Finally, if none of these hold, then approximately counting retractions to H is equivalent to approximately counting the satisfying assignments of a Boolean formula. Our second contribution is to locate the retraction counting problem for each H in the complexity landscape of related approximate counting problems. Interestingly, our results are in contrast to the situation in the exact counting context. We show that the problem of approximately counting retractions is separated both from the problem of approximately counting homomorphisms and from the problem of approximately counting list homomorphisms—whereas for exact counting all three of these problems are interreducible. We also show that the number of retractions is at least as hard to approximate as both the number of surjective homomorphisms and the number of compactions. In contrast, exactly counting compactions is the hardest of all of these exact counting problems.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124273610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ivona Bezáková, Andreas Galanis, L. A. Goldberg, Daniel Stefankovic
We study the problem of approximating the value of the matching polynomial on graphs with edge parameter γ, where γ takes arbitrary values in the complex plane. When γ is a positive real, Jerrum and Sinclair showed that the problem admits an FPRAS on general graphs. For general complex values of γ, Patel and Regts, building on methods developed by Barvinok, showed that the problem admits an FPTAS on graphs of maximum degree Δ as long as γ is not a negative real number less than or equal to −1/(4(Δ −1)). Our first main result completes the picture for the approximability of the matching polynomial on bounded degree graphs. We show that for all Δ ≥ 3 and all real γ less than −1/(4(Δ −1)), the problem of approximating the value of the matching polynomial on graphs of maximum degree Δ with edge parameter γ is #P-hard. We then explore whether the maximum degree parameter can be replaced by the connective constant. Sinclair et al. showed that for positive real γ, it is possible to approximate the value of the matching polynomial using a correlation decay algorithm on graphs with bounded connective constant (and potentially unbounded maximum degree). We first show that this result does not extend in general in the complex plane; in particular, the problem is #P-hard on graphs with bounded connective constant for a dense set of γ values on the negative real axis. Nevertheless, we show that the result does extend for any complex value γ that does not lie on the negative real axis. Our analysis accounts for complex values of γ using geodesic distances in the complex plane in the metric defined by an appropriate density function.
{"title":"The Complexity of Approximating the Matching Polynomial in the Complex Plane","authors":"Ivona Bezáková, Andreas Galanis, L. A. Goldberg, Daniel Stefankovic","doi":"10.1145/3448645","DOIUrl":"https://doi.org/10.1145/3448645","url":null,"abstract":"We study the problem of approximating the value of the matching polynomial on graphs with edge parameter γ, where γ takes arbitrary values in the complex plane. When γ is a positive real, Jerrum and Sinclair showed that the problem admits an FPRAS on general graphs. For general complex values of γ, Patel and Regts, building on methods developed by Barvinok, showed that the problem admits an FPTAS on graphs of maximum degree Δ as long as γ is not a negative real number less than or equal to −1/(4(Δ −1)). Our first main result completes the picture for the approximability of the matching polynomial on bounded degree graphs. We show that for all Δ ≥ 3 and all real γ less than −1/(4(Δ −1)), the problem of approximating the value of the matching polynomial on graphs of maximum degree Δ with edge parameter γ is #P-hard. We then explore whether the maximum degree parameter can be replaced by the connective constant. Sinclair et al. showed that for positive real γ, it is possible to approximate the value of the matching polynomial using a correlation decay algorithm on graphs with bounded connective constant (and potentially unbounded maximum degree). We first show that this result does not extend in general in the complex plane; in particular, the problem is #P-hard on graphs with bounded connective constant for a dense set of γ values on the negative real axis. Nevertheless, we show that the result does extend for any complex value γ that does not lie on the negative real axis. Our analysis accounts for complex values of γ using geodesic distances in the complex plane in the metric defined by an appropriate density function.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127865846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in physics as a strict generalization of quantum strategies to understand the power and limitations of nature’s apparent non-locality. Recently, they have received attention in theoretical computer science due to connections to Complexity and Cryptography. We initiate the study of Property Testing against non-signaling strategies, focusing first on the classical problem of linearity testing (Blum, Luby, and Rubinfeld; JCSS 1993). We prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions. Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that “local views” follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of quantum mechanics as a tool to describe various non-local phenomena. Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish general equivalences between non-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of Property Testing against non-signaling strategies beyond linearity testing.
{"title":"Testing Linearity against Non-signaling Strategies","authors":"A. Chiesa, Peter Manohar, Igor Shinkar","doi":"10.1145/3397474","DOIUrl":"https://doi.org/10.1145/3397474","url":null,"abstract":"Non-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in physics as a strict generalization of quantum strategies to understand the power and limitations of nature’s apparent non-locality. Recently, they have received attention in theoretical computer science due to connections to Complexity and Cryptography. We initiate the study of Property Testing against non-signaling strategies, focusing first on the classical problem of linearity testing (Blum, Luby, and Rubinfeld; JCSS 1993). We prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions. Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that “local views” follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of quantum mechanics as a tool to describe various non-local phenomena. Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish general equivalences between non-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of Property Testing against non-signaling strategies beyond linearity testing.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128023890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let X_{m,ϵ} be the distribution over m bits X_1,…,X_m where the X_i are independent and each X_i equals 1 with probability (1−ϵ)/2 and 0 with probability (1+ϵ)/2. We consider the smallest value ϵ* of ϵ such that the distributions X_{m,ϵ} and X_{m,0} can be distinguished with constant advantage by a function f : {0,1}^m → S, which is the product of k functions f_1, f_2,…, f_k on disjoint inputs of n bits, where each f_i : {0,1}^n → S and m = nk. We prove that ϵ* = Θ(1/(√n log k)) if S = [−1,1], while ϵ* = Θ(1/√(nk)) if S is the set of unit-norm complex numbers.
{"title":"The Coin Problem for Product Tests","authors":"Chin Ho Lee, Emanuele Viola","doi":"10.1145/3201787","DOIUrl":"https://doi.org/10.1145/3201787","url":null,"abstract":"Let Xm,ϵ be the distribution over m bits X1,…,Xm where the Xi are independent and each Xi equals 1 with probability (1−ϵ)/2 and 0 with probability (1 − ϵ)/2. We consider the smallest value ϵ* of ϵ such that the distributions Xm, ϵ and Xm, 0 can be distinguished with constant advantage by a function f : {0,1}m → S, which is the product of k functions f1,f2,…, fk on disjoint inputs of n bits, where each fi : {0,1}n → S and m = nk. We prove that ϵ* = Θ(1/√n log k) if S = [−1,1], while ϵ* = Θ(1/√nk) if S is the set of unit-norm complex numbers.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122198597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The minrank over a field F of a graph G on the vertex set {1, 2,…, n} is the minimum possible rank of a matrix M ∈ F^{n×n} such that M_{i,i} ≠ 0 for every i, and M_{i,j} = 0 for all distinct non-adjacent vertices i and j in G. For an integer n, a graph H, and a field F, let g(n, H, F) denote the maximum possible minrank over F of an n-vertex graph whose complement contains no copy of H. In this article, we study this quantity for various graphs H and fields F. For finite fields, we prove by a probabilistic argument a general lower bound on g(n, H, F), which yields a nearly tight bound of Ω(√n/log n) for the triangle H = K_3. For the real field, we prove by an explicit construction that for every non-bipartite graph H, g(n, H, R) ≥ n^δ for some δ = δ(H) > 0. As a by-product of this construction, we disprove a conjecture of Codenotti et al. [11]. The results are motivated by questions in information theory, circuit complexity, and geometry.
{"title":"On Minrank and Forbidden Subgraphs","authors":"I. Haviv","doi":"10.1145/3322817","DOIUrl":"https://doi.org/10.1145/3322817","url":null,"abstract":"The minrank over a field F of a graph G on the vertex set { 1,2,… ,n} is the minimum possible rank of a matrix M ∈ Fn × n such that Mi, i ≠ 0 for every i, and Mi, j =0 for every distinct non-adjacent vertices i and j in G. For an integer n, a graph H, and a field F, let g(n,H, F) denote the maximum possible minrank over F of an n-vertex graph whose complement contains no copy of H. In this article, we study this quantity for various graphs H and fields F. For finite fields, we prove by a probabilistic argument a general lower bound on g(n,H,F), which yields a nearly tight bound of Ω (√ n/ log n) for the triangle H=K3. For the real field, we prove by an explicit construction that for every non-bipartite graph H, g(n,H, R) ≥ nδ for some δ = δ (H)> 0. As a by-product of this construction, we disprove a conjecture of Codenotti et al. [11]. The results are motivated by questions in information theory, circuit complexity, and geometry.","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131592258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, we study the identity testing problem of arithmetic read-once formulas (ROFs) and some related models. An ROF is a formula (a circuit whose underlying graph is a tree) in which the operations are {+, ×} and such that every input variable labels at most one leaf. We obtain the first polynomial-time deterministic identity testing algorithm that operates in the black-box setting for ROFs, as well as some other related models. As an application, we obtain the first polynomial-time deterministic reconstruction algorithm for such formulas. Our results are obtained by improving and extending the analysis of the algorithm of Shpilka and Volkovich [51].
{"title":"Complete Derandomization of Identity Testing and Reconstruction of Read-Once Formulas","authors":"Daniel Minahan, Ilya Volkovich","doi":"10.1145/3196836","DOIUrl":"https://doi.org/10.1145/3196836","url":null,"abstract":"In this article, we study the identity testing problem of arithmetic read-once formulas (ROFs) and some related models. An ROF is a formula (a circuit whose underlying graph is a tree) in which the operations are { +, × } and such that every input variable labels at most one leaf. We obtain the first polynomial-time deterministic identity testing algorithm that operates in the black-box setting for ROFs, as well as some other related models. As an application, we obtain the first polynomial-time deterministic reconstruction algorithm for such formulas. Our results are obtained by improving and extending the analysis of the algorithm of Shpilka and Yolkovich [51].","PeriodicalId":198744,"journal":{"name":"ACM Transactions on Computation Theory (TOCT)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124180066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}