Space characterizations of complexity measures and size-space trade-offs in propositional proof systems
Pub Date: 2023-05-01. DOI: 10.4230/LIPIcs.ICALP.2022.100
Theodoros Papamakarios, A. Razborov
We identify two new big clusters of proof complexity measures equivalent up to polynomial and log n factors. The first cluster contains, among others, the logarithm of tree-like resolution size, regularized (that is, multiplied by the logarithm of proof length) clause and monomial space, and clause space, both ordinary and regularized, in regular and tree-like resolution. As a consequence, separating clause or monomial space from the (logarithm of) tree-like resolution size is the same as showing a strong trade-off between clause or monomial space and proof length, and is the same as showing a super-critical trade-off between clause space and depth. The second cluster contains width, Σ_2 space (a generalization of clause space to depth-2 Frege systems), both ordinary and regularized, as well as the logarithm of tree-like size in the system R(log). As an application of some of these simulations, we improve a known size-space trade-off for polynomial calculus with resolution. In terms of lower bounds, we show a quadratic lower bound on tree-like resolution size for formulas refutable in clause space 4. Along the way, we introduce yet another proof complexity measure, intermediate between depth and the logarithm of tree-like size, which might be of independent interest.
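As a reading aid, one natural way to make the parenthetical definition of regularized space concrete is sketched below; the notation CS, L, and the exact way the minimum is taken are our assumptions, not necessarily the paper's.

```latex
% Illustrative formalization (assumed notation): for a resolution refutation
% \pi of a CNF \tau, let CS(\pi) denote its clause space and L(\pi) its length.
% The regularized clause space of \tau is then the best achievable product
\[
  \mathrm{CS}^{\mathrm{reg}}(\tau) \;=\;
    \min_{\pi \,:\, \pi \text{ refutes } \tau} \mathrm{CS}(\pi)\cdot \log L(\pi),
\]
% and the first cluster asserts that this quantity, its monomial-space analogue,
% and the logarithm of the minimal tree-like resolution size all agree up to
% polynomial and \log n factors.
```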
{"title":"Space characterizations of complexity measures and size-space trade-offs in propositional proof systems","authors":"Theodoros Papamakarios, A. Razborov","doi":"10.4230/LIPIcs.ICALP.2022.100","DOIUrl":"https://doi.org/10.4230/LIPIcs.ICALP.2022.100","url":null,"abstract":"We identify two new big clusters of proof complexity measures equivalent up to polynomial and log n factors. The first cluster contains, among others, the logarithm of tree-like resolution size, regularized (that is, multiplied by the logarithm of proof length) clause and monomial space, and clause space, both ordinary and regularized, in regular and tree-like resolution. As a consequence, separating clause or monomial space from the (logarithm of) tree-like resolution size is the same as showing a strong trade-off between clause or monomial space and proof length, and is the same as showing a super-critical trade-off between clause space and depth. The second cluster contains width, Σ 2 space (a generalization of clause space to depth 2 Frege systems), both ordinary and regularized, as well as the logarithm of tree-like size in the system R (log). As an application of some of these simulations, we improve a known size-space trade-off for polynomial calculus with resolution. In terms of lower bounds, we show a quadratic lower bound on tree-like resolution size for formulas refutable in clause space 4. We introduce on our way yet another proof complexity measure intermediate between depth and the logarithm of tree-like size that might be of independent interest.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81788328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Limits of CDCL Learning via Merge Resolution
Pub Date: 2023-04-19. DOI: 10.48550/arXiv.2304.09422
Marc Vinyals, Chunxiao Li, Noah Fleming, A. Kolokolova, Vijay Ganesh
In their seminal work, Atserias et al. and, independently, Pipatsrisawat and Darwiche showed in 2009 that CDCL solvers can simulate resolution proofs with polynomial overhead. However, previous work does not address the tightness of the simulation, i.e., the question of how large this overhead needs to be. In this paper, we address this question by focusing on an important property of proofs generated by CDCL solvers that employ standard learning schemes, namely that the derivation of a learned clause has at least one inference in which a literal appears in both premises (a so-called merge literal). Specifically, we show that proofs of this kind can simulate resolution proofs with at most a linear overhead, but also that such overhead is sometimes necessary; more precisely, there exist formulas with resolution proofs of linear length that require quadratic CDCL proofs.
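For readers unfamiliar with the term, the following small sketch (our illustration, not code from the paper; the function name and clause encoding are hypothetical) shows a single resolution step and how a merge literal arises in it.

```python
# Literals are integers: +v is the variable v, -v its negation; a clause is a set.

def resolve(c1, c2, pivot):
    """Resolve clauses c1 and c2 on `pivot`.

    Requires +pivot in c1 and -pivot in c2. Returns the resolvent together
    with the set of merge literals, i.e. literals occurring in both premises.
    """
    assert pivot in c1 and -pivot in c2, "pivot must occur positively in c1 and negatively in c2"
    resolvent = (set(c1) - {pivot}) | (set(c2) - {-pivot})
    merges = (set(c1) - {pivot}) & (set(c2) - {-pivot})
    return resolvent, merges

# Example: (x1 or x2 or x3) and (not x1 or x2 or x4), resolved on x1.
# x2 appears in both premises, so this inference contains a merge literal.
resolvent, merges = resolve({1, 2, 3}, {-1, 2, 4}, pivot=1)
print(sorted(resolvent))  # [2, 3, 4]
print(sorted(merges))     # [2]
```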
{"title":"Limits of CDCL Learning via Merge Resolution","authors":"Marc Vinyals, Chunxiao Li, Noah Fleming, A. Kolokolova, Vijay Ganesh","doi":"10.48550/arXiv.2304.09422","DOIUrl":"https://doi.org/10.48550/arXiv.2304.09422","url":null,"abstract":"In their seminal work, Atserias et al. and independently Pipatsrisawat and Darwiche in 2009 showed that CDCL solvers can simulate resolution proofs with polynomial overhead. However, previous work does not address the tightness of the simulation, i.e., the question of how large this overhead needs to be. In this paper, we address this question by focusing on an important property of proofs generated by CDCL solvers that employ standard learning schemes, namely that the derivation of a learned clause has at least one inference where a literal appears in both premises (aka, a merge literal). Specifically, we show that proofs of this kind can simulate resolution proofs with at most a linear overhead, but there also exist formulas where such overhead is necessary or, more precisely, that there exist formulas with resolution proofs of linear length that require quadratic CDCL proofs.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84258791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Randomly punctured Reed-Solomon codes achieve list-decoding capacity over linear-sized fields
Pub Date: 2023-04-19. DOI: 10.48550/arXiv.2304.09445
Omar Alrabiah, V. Guruswami, Ray Li
Reed-Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed-Solomon codes, a fundamental question in coding theory is determining whether Reed-Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed-Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed-Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed-Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed-Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed-Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and the code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size $O(1/\varepsilon)$ and near-optimal alphabet size $2^{O(1/\varepsilon^2)}$, where $\varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size $O(1/\varepsilon)$ was previously not known to be achievable with any linear code over a constant alphabet size (even non-constructively). Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
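As background for the statement above, here is a minimal sketch of the plain Reed-Solomon encoding map over a small prime field; the field size, message, and function name are illustrative assumptions, and the paper's results concern list-decoding of randomly punctured versions of such codes rather than this encoding itself.

```python
# Encode a message of length k as evaluations of the degree-(k-1) polynomial
# whose coefficients are the message symbols, at n distinct points of F_p.

p = 101                      # prime field size, chosen for illustration
k, n = 3, 7                  # message length (degree < k) and block length

def rs_encode(message, eval_points, p):
    """Evaluate sum_i message[i] * x^i at each evaluation point, mod p."""
    assert len(eval_points) == len(set(eval_points)), "evaluation points must be distinct"
    return [sum(c * pow(x, i, p) for i, c in enumerate(message)) % p
            for x in eval_points]

codeword = rs_encode([5, 17, 2], list(range(n)), p)
print(codeword)              # 7 field elements; any k of them determine the message
```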
{"title":"Randomly punctured Reed-Solomon codes achieve list-decoding capacity over linear-sized fields","authors":"Omar Alrabiah, V. Guruswami, Ray Li","doi":"10.48550/arXiv.2304.09445","DOIUrl":"https://doi.org/10.48550/arXiv.2304.09445","url":null,"abstract":"Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed-Solomon codes, a fundamental question in coding theory is determining if Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam, established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly-punctured Reed--Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list-size $O(1/varepsilon)$ and near-optimal alphabet size $2^{O(1/varepsilon^2)}$, where $varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list-size $O(1/varepsilon)$ was previously not known to be achievable with any linear code over a constant alphabet size (even non-constructively). Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84875191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tight Correlation Bounds for Circuits Between AC0 and TC0
Pub Date: 2023-04-05. DOI: 10.48550/arXiv.2304.02770
Vinayak Kumar
We initiate the study of generalized AC0 circuits comprised of negations and arbitrary unbounded fan-in gates that only need to be constant over inputs of Hamming weight $\ge k$, which we denote GC0$(k)$. The gate set of this class includes biased LTFs like the $k$-OR (output $1$ iff $\ge k$ bits are 1) and $k$-AND (output $0$ iff $\ge k$ bits are 0), and thus can be seen as an interpolation between AC0 and TC0. We establish a tight multi-switching lemma for GC0$(k)$ circuits, which bounds the probability that several depth-2 GC0$(k)$ circuits do not simultaneously simplify under a random restriction. We also establish a new depth reduction lemma such that, coupled with our multi-switching lemma, many results obtained from the multi-switching lemma for depth-$d$ size-$s$ AC0 circuits lift to depth-$d$ size-$s^{.99}$ GC0$(.01\log s)$ circuits with no loss in parameters (other than hidden constants). Our result has the following applications:
1. Size-$2^{\Omega(n^{1/d})}$ depth-$d$ GC0$(\Omega(n^{1/d}))$ circuits do not correlate with parity (extending a result of Håstad (SICOMP, 2014)).
2. Size-$n^{\Omega(\log n)}$ GC0$(\Omega(\log^2 n))$ circuits with $n^{.249}$ arbitrary threshold gates or $n^{.499}$ arbitrary symmetric gates exhibit exponentially small correlation against an explicit function (extending a result of Tan and Servedio (RANDOM, 2019)).
3. There is a pseudorandom generator with seed length $O((\log m)^{d-1}\log(m/\varepsilon)\log\log m)$ against size-$m$ depth-$d$ GC0$(\log m)$ circuits, matching the AC0 lower bound of Håstad up to a $\log\log m$ factor (extending a result of Lyu (CCC, 2022)).
4. Size-$m$ GC0$(\log m)$ circuits have exponentially small Fourier tails (extending a result of Tal (CCC, 2017)).
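The two gate types named above are easy to state in code; the following sketch (our illustration, with hypothetical function names) just spells out the $k$-OR and $k$-AND conditions from the abstract.

```python
# k-OR: outputs 1 iff at least k input bits are 1.
# k-AND: outputs 0 iff at least k input bits are 0.
# With k = 1 these are the ordinary OR and AND gates of AC0; larger k gives
# biased linear threshold gates, sitting between AC0 and TC0 gates.

def k_or(bits, k):
    return int(sum(bits) >= k)

def k_and(bits, k):
    return int(sum(1 - b for b in bits) < k)

x = [1, 0, 1, 1, 0, 0]
print(k_or(x, 2), k_and(x, 2))   # 1 (three ones >= 2), 0 (three zeros >= 2)
```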
{"title":"Tight Correlation Bounds for Circuits Between AC0 and TC0","authors":"Vinayak Kumar","doi":"10.48550/arXiv.2304.02770","DOIUrl":"https://doi.org/10.48550/arXiv.2304.02770","url":null,"abstract":"We initiate the study of generalized AC0 circuits comprised of negations and arbitrary unbounded fan-in gates that only need to be constant over inputs of Hamming weight $ge k$, which we denote GC0$(k)$. The gate set of this class includes biased LTFs like the $k$-$OR$ (output $1$ iff $ge k$ bits are 1) and $k$-$AND$ (output $0$ iff $ge k$ bits are 0), and thus can be seen as an interpolation between AC0 and TC0. We establish a tight multi-switching lemma for GC0$(k)$ circuits, which bounds the probability that several depth-2 GC0$(k)$ circuits do not simultaneously simplify under a random restriction. We also establish a new depth reduction lemma such that coupled with our multi-switching lemma, we can show many results obtained from the multi-switching lemma for depth-$d$ size-$s$ AC0 circuits lifts to depth-$d$ size-$s^{.99}$ GC0$(.01log s)$ circuits with no loss in parameters (other than hidden constants). Our result has the following applications: 1.Size-$2^{Omega(n^{1/d})}$ depth-$d$ GC0$(Omega(n^{1/d}))$ circuits do not correlate with parity (extending a result of H{aa}stad (SICOMP, 2014)). 2. Size-$n^{Omega(log n)}$ GC0$(Omega(log^2 n))$ circuits with $n^{.249}$ arbitrary threshold gates or $n^{.499}$ arbitrary symmetric gates exhibit exponentially small correlation against an explicit function (extending a result of Tan and Servedio (RANDOM, 2019)). 3. There is a seed length $O((log m)^{d-1}log(m/varepsilon)loglog(m))$ pseudorandom generator against size-$m$ depth-$d$ GC0$(log m)$ circuits, matching the AC0 lower bound of H{aa}stad stad up to a $loglog m$ factor (extending a result of Lyu (CCC, 2022)). 4. Size-$m$ GC0$(log m)$ circuits have exponentially small Fourier tails (extending a result of Tal (CCC, 2017)).","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"34 7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82790030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coboundary and cosystolic expansion without dependence on dimension or degree
Pub Date: 2023-04-04. DOI: 10.48550/arXiv.2304.01608
Yotam Dikstein, Irit Dinur
We give new bounds on the cosystolic expansion constants of several families of high dimensional expanders, and on the known coboundary expansion constants of order complexes of homogeneous geometric lattices, including the spherical building of $SL_n(F_q)$. The improvement applies to the high dimensional expanders constructed by Lubotzky, Samuels and Vishne, and by Kaufman and Oppenheim. Our new expansion constants do not depend on the degree of the complex, nor on its dimension, nor on the group of coefficients. This implies improved bounds on Gromov's topological overlap constant, and on Dinur and Meshulam's cover stability, which may have applications for agreement testing. In comparison, existing bounds decay exponentially with the ambient dimension (for spherical buildings) and in addition decay linearly with the degree (for all known bounded-degree high dimensional expanders). Our results are based on several new techniques:
* We develop a new "color-restriction" technique which enables proving dimension-free expansion by restricting a multi-partite complex to small random subsets of its color classes.
* We give a new "spectral" proof of Evra and Kaufman's local-to-global theorem, deriving better bounds and removing the dependence on the degree. This theorem bounds the cosystolic expansion of a complex using coboundary expansion and spectral expansion of the links.
* We derive absolute bounds on the coboundary expansion of the spherical building (and of any order complex of a homogeneous geometric lattice) by constructing a novel family of very short cones.
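For orientation, one common (unweighted) way to state coboundary expansion is sketched below; the precise normalization used in the paper may differ, so treat this as an illustrative definition rather than the authors'.

```latex
% Illustrative definition (normalization assumed): X a simplicial complex,
% cochains over F_2, \delta the coboundary map, \|\cdot\| the normalized
% Hamming weight. X is an \varepsilon-coboundary expander in dimension i if
\[
  \|\delta f\| \;\ge\; \varepsilon \cdot
    \min_{g \in C^{i-1}(X;\mathbb{F}_2)} \|f - \delta g\|
  \qquad \text{for every } f \in C^{i}(X;\mathbb{F}_2).
\]
% Cosystolic expansion is the weaker variant in which the distance on the
% right-hand side is measured to the space of cocycles Z^i rather than to the
% coboundaries, together with a lower bound on the weight of every
% non-trivial cocycle.
```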
{"title":"Coboundary and cosystolic expansion without dependence on dimension or degree","authors":"Yotam Dikstein, Irit Dinur","doi":"10.48550/arXiv.2304.01608","DOIUrl":"https://doi.org/10.48550/arXiv.2304.01608","url":null,"abstract":"We give new bounds on the cosystolic expansion constants of several families of high dimensional expanders, and the known coboundary expansion constants of order complexes of homogeneous geometric lattices, including the spherical building of $SL_n(F_q)$. The improvement applies to the high dimensional expanders constructed by Lubotzky, Samuels and Vishne, and by Kaufman and Oppenheim. Our new expansion constants do not depend on the degree of the complex nor on its dimension, nor on the group of coefficients. This implies improved bounds on Gromov's topological overlap constant, and on Dinur and Meshulam's cover stability, which may have applications for agreement testing. In comparison, existing bounds decay exponentially with the ambient dimension (for spherical buildings) and in addition decay linearly with the degree (for all known bounded-degree high dimensional expanders). Our results are based on several new techniques: * We develop a new\"color-restriction\"technique which enables proving dimension-free expansion by restricting a multi-partite complex to small random subsets of its color classes. * We give a new\"spectral\"proof for Evra and Kaufman's local-to-global theorem, deriving better bounds and getting rid of the dependence on the degree. This theorem bounds the cosystolic expansion of a complex using coboundary expansion and spectral expansion of the links. * We derive absolute bounds on the coboundary expansion of the spherical building (and any order complex of a homogeneous geometric lattice) by constructing a novel family of very short cones.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"100 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76068533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Numerical Multivariate Multipoint Evaluation
Pub Date: 2023-04-03. DOI: 10.48550/arXiv.2304.01191
Sumanta Ghosh, P. Harsha, Simao Herdade, Mrinal Kumar, Ramprasad Saptharishi
We design nearly-linear time numerical algorithms for the problem of multivariate multipoint evaluation over the fields of rational, real and complex numbers. We consider both exact and approximate versions of the algorithm. The inputs to the algorithms are (1) the coefficients of an $m$-variate polynomial $f$ with degree $d$ in each variable, and (2) points $a_1,\ldots, a_N$ each of whose coordinates has value bounded by one and bit-complexity $s$.
* Approximate version: Given additionally an accuracy parameter $t$, the algorithm computes rational numbers $\beta_1,\ldots, \beta_N$ such that $|f(a_i) - \beta_i| \leq \frac{1}{2^t}$ for all $i$, and has a running time of $((Nm + d^m)(s + t))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$.
* Exact version (over the rationals): Given additionally a bound $c$ on the bit-complexity of all evaluations, the algorithm computes the rational numbers $f(a_1), \ldots, f(a_N)$ in time $((Nm + d^m)(s + c))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$.
Prior to this work, a nearly-linear time algorithm for multivariate multipoint evaluation (exact or approximate) over any infinite field appears to be known only for the case of univariate polynomials, and was discovered in a recent work of Moroz (FOCS 2021). In this work, we extend this result from the univariate to the multivariate setting. However, our algorithm is based on ideas that seem to be conceptually different from those of Moroz (FOCS 2021) and crucially relies on a recent algorithm of Bhargava, Ghosh, Guo, Kumar and Umans (FOCS 2022) for multivariate multipoint evaluation over finite fields, and on known efficient algorithms for the problems of rational number reconstruction and fast Chinese remaindering in computational number theory.
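To fix what is being computed, here is a naive baseline evaluator over the rationals (our sketch, not the paper's algorithm, with hypothetical names); it makes explicit the roughly $N \cdot (d+1)^m$ operation cost that the nearly-linear time algorithms improve on.

```python
from fractions import Fraction
from itertools import product

def evaluate(coeffs, d, m, point):
    """Evaluate an m-variate polynomial at one point, term by term.

    coeffs maps exponent tuples (e_1, ..., e_m), with 0 <= e_i <= d, to
    rational coefficients; missing tuples are treated as zero.
    """
    total = Fraction(0)
    for exps in product(range(d + 1), repeat=m):
        c = coeffs.get(exps, Fraction(0))
        if c:
            term = c
            for x, e in zip(point, exps):
                term *= Fraction(x) ** e
            total += term
    return total

# f(x, y) = 1 + 2*x*y + 3*y^2, evaluated at two rational points.
coeffs = {(0, 0): Fraction(1), (1, 1): Fraction(2), (0, 2): Fraction(3)}
points = [(Fraction(1, 2), Fraction(1, 3)), (Fraction(0), Fraction(1))]
print([evaluate(coeffs, d=2, m=2, point=p) for p in points])  # [5/3, 4]
```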
{"title":"Fast Numerical Multivariate Multipoint Evaluation","authors":"Sumanta Ghosh, P. Harsha, Simao Herdade, Mrinal Kumar, Ramprasad Saptharishi","doi":"10.48550/arXiv.2304.01191","DOIUrl":"https://doi.org/10.48550/arXiv.2304.01191","url":null,"abstract":"We design nearly-linear time numerical algorithms for the problem of multivariate multipoint evaluation over the fields of rational, real and complex numbers. We consider both emph{exact} and emph{approximate} versions of the algorithm. The input to the algorithms are (1) coefficients of an $m$-variate polynomial $f$ with degree $d$ in each variable, and (2) points $a_1,..., a_N$ each of whose coordinate has value bounded by one and bit-complexity $s$. * Approximate version: Given additionally an accuracy parameter $t$, the algorithm computes rational numbers $beta_1,ldots, beta_N$ such that $|f(a_i) - beta_i| leq frac{1}{2^t}$ for all $i$, and has a running time of $((Nm + d^m)(s + t))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$. * Exact version (when over rationals): Given additionally a bound $c$ on the bit-complexity of all evaluations, the algorithm computes the rational numbers $f(a_1), ... , f(a_N)$, in time $((Nm + d^m)(s + c))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$. . Prior to this work, a nearly-linear time algorithm for multivariate multipoint evaluation (exact or approximate) over any infinite field appears to be known only for the case of univariate polynomials, and was discovered in a recent work of Moroz (FOCS 2021). In this work, we extend this result from the univariate to the multivariate setting. However, our algorithm is based on ideas that seem to be conceptually different from those of Moroz (FOCS 2021) and crucially relies on a recent algorithm of Bhargava, Ghosh, Guo, Kumar&Umans (FOCS 2022) for multivariate multipoint evaluation over finite fields, and known efficient algorithms for the problems of rational number reconstruction and fast Chinese remaindering in computational number theory.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"84 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88961500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A d^{1/2+o(1)} Monotonicity Tester for Boolean Functions on d-Dimensional Hypergrids
Pub Date: 2023-04-03. DOI: 10.48550/arXiv.2304.01416
Hadley Black, Deeparnab Chakrabarty, C. Seshadhri
Monotonicity testing of Boolean functions on the hypergrid, $f:[n]^d \to \{0,1\}$, is a classic topic in property testing. Determining the non-adaptive complexity of this problem is an important open question. For arbitrary $n$, [Black-Chakrabarty-Seshadhri, SODA 2020] describe a tester with query complexity $\widetilde{O}(\varepsilon^{-4/3}d^{5/6})$. This complexity is independent of $n$, but has a suboptimal dependence on $d$. Recently, [Braverman-Khot-Kindler-Minzer, ITCS 2023] and [Black-Chakrabarty-Seshadhri, STOC 2023] describe $\widetilde{O}(\varepsilon^{-2} n^3\sqrt{d})$ and $\widetilde{O}(\varepsilon^{-2} n\sqrt{d})$-query testers, respectively. These testers have an almost optimal dependence on $d$, but a suboptimal polynomial dependence on $n$. In this paper, we describe a non-adaptive, one-sided monotonicity tester with query complexity $O(\varepsilon^{-2} d^{1/2 + o(1)})$, independent of $n$. Up to the $d^{o(1)}$ factors, our result resolves the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids. The independence of $n$ yields a non-adaptive, one-sided $O(\varepsilon^{-2} d^{1/2 + o(1)})$-query monotonicity tester for Boolean functions $f:\mathbb{R}^d \to \{0,1\}$ associated with an arbitrary product measure.
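The object being constructed is a one-sided, non-adaptive tester; the following naive sampler (our sketch, not the paper's tester, with hypothetical names) illustrates the one-sidedness, rejecting only on a witnessed violation, but offers none of the query-complexity guarantees above.

```python
import random

def naive_monotonicity_tester(f, n, d, queries):
    """Sample comparable pairs x <= y in [n]^d and reject only on a violation."""
    for _ in range(queries):
        x = [random.randrange(n) for _ in range(d)]
        y = [random.randrange(xi, n) for xi in x]   # a random y >= x coordinate-wise
        if f(tuple(x)) > f(tuple(y)):
            return "reject"        # a certified violation of monotonicity
    return "accept"                # never rejects a monotone function (one-sided)

def f(z):                          # a monotone threshold function on [10]^3
    return int(sum(z) >= 15)

print(naive_monotonicity_tester(f, n=10, d=3, queries=200))  # always "accept"
```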
{"title":"A d1/2+o(1) Monotonicity Tester for Boolean Functions on d-Dimensional Hypergrids","authors":"Hadley Black, Deeparnab Chakrabarty, C. Seshadhri","doi":"10.48550/arXiv.2304.01416","DOIUrl":"https://doi.org/10.48550/arXiv.2304.01416","url":null,"abstract":"Monotonicity testing of Boolean functions on the hypergrid, $f:[n]^d to {0,1}$, is a classic topic in property testing. Determining the non-adaptive complexity of this problem is an important open question. For arbitrary $n$, [Black-Chakrabarty-Seshadhri, SODA 2020] describe a tester with query complexity $widetilde{O}(varepsilon^{-4/3}d^{5/6})$. This complexity is independent of $n$, but has a suboptimal dependence on $d$. Recently, [Braverman-Khot-Kindler-Minzer, ITCS 2023] and [Black-Chakrabarty-Seshadhri, STOC 2023] describe $widetilde{O}(varepsilon^{-2} n^3sqrt{d})$ and $widetilde{O}(varepsilon^{-2} nsqrt{d})$-query testers, respectively. These testers have an almost optimal dependence on $d$, but a suboptimal polynomial dependence on $n$. In this paper, we describe a non-adaptive, one-sided monotonicity tester with query complexity $O(varepsilon^{-2} d^{1/2 + o(1)})$, independent of $n$. Up to the $d^{o(1)}$-factors, our result resolves the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids. The independence of $n$ yields a non-adaptive, one-sided $O(varepsilon^{-2} d^{1/2 + o(1)})$-query monotonicity tester for Boolean functions $f:mathbb{R}^d to {0,1}$ associated with an arbitrary product measure.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"178 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74164972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The communication complexity of functions with large outputs
Pub Date: 2023-04-01. DOI: 10.48550/arXiv.2304.00391
Lila Fontes, Sophie Laplante, M. Laurière, Alexandre Nolin
We study the two-party communication complexity of functions with large outputs, and show that the communication complexity can vary greatly depending on which output model is considered. We study a variety of output models, ranging from the open model, in which an external observer can compute the outcome, to the XOR model, in which the outcome of the protocol should be the bitwise XOR of the players' local outputs. This model is inspired by XOR games, which are widely studied two-player quantum games. We focus on the question of error reduction in these new output models. For functions of output size $k$, applying standard error reduction techniques in the XOR model would introduce an additional cost linear in $k$. We show that no dependency on $k$ is necessary. Similarly, standard randomness removal techniques incur a multiplicative cost of $2^k$ in the XOR model. We show how to reduce this factor to $O(k)$. In addition, we prove analogous error reduction and randomness removal results in the other models, separate all models from each other, and show that some natural problems, including Set Intersection and Find the First Difference, separate the models when the Hamming weights of their inputs are bounded. Finally, we show how to use the rank lower bound technique for our weak output models.
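A toy example, ours rather than the paper's, of why the output model matters: for the function that XORs the two inputs, the XOR output model needs no communication at all, whereas in the open model an external observer must be able to read the k-bit answer off the transcript, which costs on the order of k bits.

```python
def xor_model_protocol(x, y):
    """Zero-communication XOR-model protocol for f(x, y) = x ^ y."""
    alice_output = x       # Alice's local output depends only on her input x
    bob_output = y         # Bob's local output depends only on his input y
    return alice_output, bob_output

x, y = 0b101101, 0b110011
a, b = xor_model_protocol(x, y)
assert a ^ b == x ^ y      # the XOR of the local outputs equals f(x, y)
print(bin(a ^ b))
```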
{"title":"The communication complexity of functions with large outputs","authors":"Lila Fontes, Sophie Laplante, M. Laurière, Alexandre Nolin","doi":"10.48550/arXiv.2304.00391","DOIUrl":"https://doi.org/10.48550/arXiv.2304.00391","url":null,"abstract":"We study the two-party communication complexity of functions with large outputs, and show that the communication complexity can greatly vary depending on what output model is considered. We study a variety of output models, ranging from the open model, in which an external observer can compute the outcome, to the XOR model, in which the outcome of the protocol should be the bitwise XOR of the players' local outputs. This model is inspired by XOR games, which are widely studied two-player quantum games. We focus on the question of error-reduction in these new output models. For functions of output size k, applying standard error reduction techniques in the XOR model would introduce an additional cost linear in k. We show that no dependency on k is necessary. Similarly, standard randomness removal techniques, incur a multiplicative cost of $2^k$ in the XOR model. We show how to reduce this factor to O(k). In addition, we prove analogous error reduction and randomness removal results in the other models, separate all models from each other, and show that some natural problems, including Set Intersection and Find the First Difference, separate the models when the Hamming weights of their inputs is bounded. Finally, we show how to use the rank lower bound technique for our weak output models.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87400822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Certified Hardness vs. Randomness for Log-Space
Pub Date: 2023-03-29. DOI: 10.48550/arXiv.2303.16413
Edward Pyne, R. Raz, Wei Zhan
Let $\mathcal{L}$ be a language that can be decided in linear space and let $\epsilon>0$ be any constant. Let $\mathcal{A}$ be the exponential hardness assumption that for every $n$, membership in $\mathcal{L}$ for inputs of length $n$ cannot be decided by circuits of size smaller than $2^{\epsilon n}$. We prove that for every function $f :\{0,1\}^* \rightarrow \{0,1\}$, computable by a randomized logspace algorithm $R$, there exists a deterministic logspace algorithm $D$ (attempting to compute $f$), such that on every input $x$ of length $n$, the algorithm $D$ outputs one of the following:
1: The correct value $f(x)$.
2: The string "I am unable to compute $f(x)$ because the hardness assumption $\mathcal{A}$ is false", followed by a (provably correct) circuit of size smaller than $2^{\epsilon n'}$ for membership in $\mathcal{L}$ for inputs of length $n'$, for some $n' = \Theta(\log n)$; that is, a circuit that refutes $\mathcal{A}$.
Our next result is a universal derandomizer for $BPL$: We give a deterministic algorithm $U$ that takes as input a randomized logspace algorithm $R$ and an input $x$, and simulates the computation of $R$ on $x$ deterministically. Under the widely believed assumption $BPL=L$, the space used by $U$ is at most $C_R \cdot \log n$ (where $C_R$ is a constant depending on $R$). Moreover, for every constant $c \geq 1$, if $BPL\subseteq SPACE[(\log n)^{c}]$ then the space used by $U$ is at most $C_R \cdot (\log n)^{c}$.
Finally, we prove that if optimal hitting sets for ordered branching programs exist, then there is a deterministic logspace algorithm that, given black-box access to an ordered branching program $B$ of size $n$, estimates the probability that $B$ accepts on a uniformly random input. This extends the result of (Cheng and Hoza CCC 2020), who proved that an optimal hitting set implies a white-box two-sided derandomization.
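For context on the last result, the quantity being estimated is the acceptance probability of an ordered branching program on a uniformly random input. The sketch below (our illustration, not the paper's logspace procedure; names and encoding are hypothetical) computes it exactly by a layer-by-layer dynamic program, using space proportional to the width rather than logarithmic space.

```python
def acceptance_probability(layers, start, accept_states):
    """layers[i][s] = (s0, s1): successors of state s in layer i on input bit 0 / 1."""
    prob = {start: 1.0}
    for layer in layers:
        nxt = {}
        for state, p in prob.items():
            s0, s1 = layer[state]
            nxt[s0] = nxt.get(s0, 0.0) + p / 2   # next input bit is 0 with prob. 1/2
            nxt[s1] = nxt.get(s1, 0.0) + p / 2   # ... and 1 with prob. 1/2
        prob = nxt
    return sum(p for s, p in prob.items() if s in accept_states)

# Width-2 program reading 2 bits that accepts iff both bits are 1.
layers = [
    {"u": ("rej", "v")},                              # layer 1: from start state "u"
    {"v": ("rej", "acc"), "rej": ("rej", "rej")},     # layer 2
]
print(acceptance_probability(layers, start="u", accept_states={"acc"}))  # 0.25
```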
{"title":"Certified Hardness vs. Randomness for Log-Space","authors":"Edward Pyne, R. Raz, Wei Zhan","doi":"10.48550/arXiv.2303.16413","DOIUrl":"https://doi.org/10.48550/arXiv.2303.16413","url":null,"abstract":"Let $mathcal{L}$ be a language that can be decided in linear space and let $epsilon>0$ be any constant. Let $mathcal{A}$ be the exponential hardness assumption that for every $n$, membership in $mathcal{L}$ for inputs of length~$n$ cannot be decided by circuits of size smaller than $2^{epsilon n}$. We prove that for every function $f :{0,1}^* rightarrow {0,1}$, computable by a randomized logspace algorithm $R$, there exists a deterministic logspace algorithm $D$ (attempting to compute $f$), such that on every input $x$ of length $n$, the algorithm $D$ outputs one of the following: 1: The correct value $f(x)$. 2: The string: ``I am unable to compute $f(x)$ because the hardness assumption $mathcal{A}$ is false'', followed by a (provenly correct) circuit of size smaller than $2^{epsilon n'}$ for membership in $mathcal{L}$ for inputs of length~$n'$, for some $n' = Theta (log n)$; that is, a circuit that refutes $mathcal{A}$. Our next result is a universal derandomizer for $BPL$: We give a deterministic algorithm $U$ that takes as an input a randomized logspace algorithm $R$ and an input $x$ and simulates the computation of $R$ on $x$, deteriministically. Under the widely believed assumption $BPL=L$, the space used by $U$ is at most $C_R cdot log n$ (where $C_R$ is a constant depending on~$R$). Moreover, for every constant $c geq 1$, if $BPLsubseteq SPACE[(log(n))^{c}]$ then the space used by $U$ is at most $C_R cdot (log(n))^{c}$. Finally, we prove that if optimal hitting sets for ordered branching programs exist then there is a deterministic logspace algorithm that, given a black-box access to an ordered branching program $B$ of size $n$, estimates the probability that $B$ accepts on a uniformly random input. This extends the result of (Cheng and Hoza CCC 2020), who proved that an optimal hitting set implies a white-box two-sided derandomization.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"129 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80247730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two Source Extractors for Asymptotically Optimal Entropy, and (Many) More
Pub Date: 2023-03-13. DOI: 10.48550/arXiv.2303.06802
Xin Li
A long line of work over the past two decades or so has established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object will lead to asymptotically optimal solutions to all the others. However, despite considerable effort, previous works could get close but still lacked one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well studied extractors and applications that have been the subjects of long lines of research. Our results include:
* Asymptotically optimal seeded non-malleable extractors, which in turn give two-source extractors for asymptotically optimal min-entropy $O(\log n)$, explicit constructions of $K$-Ramsey graphs on $N$ vertices with $K=\log^{O(1)} N$, and truly optimal privacy amplification protocols with an active adversary.
* Two-source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit construction of non-malleable codes against $2$-split-state tampering and affine tampering with constant rate and exponentially small error.
* Explicit extractors for affine sources, sumset sources, interleaved sources, and small-space sources that achieve asymptotically optimal min-entropy of $O(\log n)$ or $2s+O(\log n)$ (for space-$s$ sources).
* An explicit function that requires strongly linear read-once branching programs of size $2^{n-O(\log n)}$, which is optimal up to the constant in $O(\cdot)$. Previously, even for standard read-once branching programs, the best known size lower bound for an explicit function was $2^{n-O(\log^2 n)}$.
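For contrast with the min-entropy $O(\log n)$ achieved above, the classical inner-product two-source extractor of Chor and Goldreich, sketched below in our own code, already extracts one almost-uniform bit, but only when the min-entropies of the two independent sources sum to noticeably more than the input length.

```python
def inner_product_extractor(x_bits, y_bits):
    """One output bit: the inner product <x, y> over GF(2)."""
    assert len(x_bits) == len(y_bits)
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

print(inner_product_extractor([1, 0, 1, 1], [1, 1, 0, 1]))  # 1 + 0 + 0 + 1 = 2 -> 0
```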
{"title":"Two Source Extractors for Asymptotically Optimal Entropy, and (Many) More","authors":"Xin Li","doi":"10.48550/arXiv.2303.06802","DOIUrl":"https://doi.org/10.48550/arXiv.2303.06802","url":null,"abstract":"A long line of work in the past two decades or so established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object will lead to asymptotically optimal solutions to all the others. However, despite considerable effort, previous works can get close but still lack one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well studied extractors and applications, that have been the subjects of long lines of research. Our results include: Asymptotically optimal seeded non-malleable extractors, which in turn give two source extractors for asymptotically optimal min-entropy of $O(log n)$, explicit constructions of $K$-Ramsey graphs on $N$ vertices with $K=log^{O(1)} N$, and truly optimal privacy amplification protocols with an active adversary. Two source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit construction of non-malleable codes against $2$-split state tampering and affine tampering with constant rate and emph{exponentially} small error. Explicit extractors for affine sources, sumset sources, interleaved sources, and small space sources that achieve asymptotically optimal min-entropy of $O(log n)$ or $2s+O(log n)$ (for space $s$ sources). An explicit function that requires strongly linear read once branching programs of size $2^{n-O(log n)}$, which is optimal up to the constant in $O(cdot)$. Previously, even for standard read once branching programs, the best known size lower bound for an explicit function is $2^{n-O(log^2 n)}$.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91238587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}