Sunflowers: from soil to oil
Pub Date: 2022-09-12. DOI: 10.1090/bull/1777
Anup Rao
A sunflower is a collection of sets whose pairwise intersections are identical. In this article, we shall go sunflower-picking. We find sunflowers in several seemingly unrelated fields, before turning to discuss recent progress on the famous sunflower conjecture of Erdős and Rado, made by Alweiss, Lovett, Wu, and Zhang, as well as a related resolution of the threshold vs expectation threshold conjecture of Kahn and Kalai discovered by Park and Pham. We give short proofs for both of these results.
{"title":"Sunflowers: from soil to oil","authors":"Anup Rao","doi":"10.1090/bull/1777","DOIUrl":"https://doi.org/10.1090/bull/1777","url":null,"abstract":"A sunflower is a collection of sets whose pairwise intersections are identical. In this article, we shall go sunflower-picking. We find sunflowers in several seemingly unrelated fields, before turning to discuss recent progress on the famous sunflower conjecture of Erdős and Rado, made by Alweiss, Lovett, Wu, and Zhang, as well as a related resolution of the threshold vs expectation threshold conjecture of Kahn and Kalai discovered by Park and Pham. We give short proofs for both of these results.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85287163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the existence of strong proof complexity generators
Pub Date: 2022-08-24. DOI: 10.48550/arXiv.2208.11642
J. Krajícek
Cook and Reckhow (1979) pointed out that NP is not closed under complementation iff there is no propositional proof system that admits polynomial-size proofs of all tautologies. The theory of proof complexity generators aims at constructing sets of tautologies that are hard for strong, and possibly for all, proof systems. We focus on a conjecture from K.2004 in the foundations of the theory that there is a proof complexity generator hard for all proof systems. This can be equivalently formulated (for p-time generators) without reference to proof complexity notions as follows: * There exists a p-time function $g$ stretching each input by one bit such that its range intersects all infinite NP sets. We consider several facets of this conjecture, including its links to bounded arithmetic (witnessing and independence results), to time-bounded Kolmogorov complexity, to the feasible disjunction property of propositional proof systems, and to the complexity of proof search. We argue that a specific gadget generator from K.2009 is a good candidate for $g$. We define a new hardness property of generators, $\bigvee$-hardness, and show that one specific gadget generator is the $\bigvee$-hardest (w.r.t. any sufficiently strong proof system). We define the class of feasibly infinite NP sets and show, assuming a hypothesis from circuit complexity, that the conjecture holds for all feasibly infinite NP sets.
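For intuition about the conjecture's phrasing: a function that stretches each input by one bit hits at most $2^n$ of the $2^{n+1}$ strings of output length $n+1$, so infinitely many strings lie outside its range; the conjecture asserts that a suitable p-time $g$ nevertheless meets every infinite NP set. A toy sketch of such a length-stretching map (our own illustration, not the gadget generator from K.2009):

```python
def g(x: str) -> str:
    """A toy length-stretching map {0,1}^n -> {0,1}^(n+1): append a parity bit.
    (Purely illustrative; not a candidate hard generator.)"""
    assert set(x) <= {"0", "1"}
    return x + str(x.count("1") % 2)

n = 3
range_g = {g(format(i, f"0{n}b")) for i in range(2 ** n)}
outside = 2 ** (n + 1) - len(range_g)
print(len(range_g), outside)  # at most 2^n strings are hit, so at least 2^n are missed
```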
{"title":"On the existence of strong proof complexity generators","authors":"J. Krajícek","doi":"10.48550/arXiv.2208.11642","DOIUrl":"https://doi.org/10.48550/arXiv.2208.11642","url":null,"abstract":"Cook and Reckhow 1979 pointed out that NP is not closed under complementation iff there is no propositional proof system that admits polynomial size proofs of all tautologies. Theory of proof complexity generators aims at constructing sets of tautologies hard for strong and possibly for all proof systems. We focus at a conjecture from K.2004 in foundations of the theory that there is a proof complexity generator hard for all proof systems. This can be equivalently formulated (for p-time generators) without a reference to proof complexity notions as follows: * There exist a p-time function $g$ stretching each input by one bit such that its range intersects all infinite NP sets. We consider several facets of this conjecture, including its links to bounded arithmetic (witnessing and independence results), to time-bounded Kolmogorov complexity, to feasible disjunction property of propositional proof systems and to complexity of proof search. We argue that a specific gadget generator from K.2009 is a good candidate for $g$. We define a new hardness property of generators, the $bigvee$-hardness, and shows that one specific gadget generator is the $bigvee$-hardest (w.r.t. any sufficiently strong proof system). We define the class of feasibly infinite NP sets and show, assuming a hypothesis from circuit complexity, that the conjecture holds for all feasibly infinite NP sets.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81515715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constant-Depth Sorting Networks
Pub Date: 2022-08-17. DOI: 10.48550/arXiv.2208.08394
Natalia Dobrokhotova-Maikova, A. Kozachinskiy, V. Podolskii
In this paper, we address sorting networks that are constructed from comparators of arity $k > 2$. That is, in our setting the arity of the comparators -- or, in other words, the number of inputs that can be sorted at unit cost -- is a parameter. We study its relationship with two other parameters -- $n$, the number of inputs, and $d$, the depth. This model has received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have a natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in. Motivated by these questions, we obtain the first lower bounds on the arity of constant-depth sorting networks. More precisely, we consider sorting networks of depth $d$ up to 4, and determine the minimal $k$ for which there is such a network with comparators of arity $k$. For depths $d = 1, 2$ we observe that $k = n$. For $d = 3$ we show that $k = \lceil \frac{n}{2} \rceil$. For $d = 4$ the minimal arity becomes sublinear: $k = \Theta(n^{2/3})$. This contrasts with the case of majority circuits, in which $k = O(n^{2/3})$ is achievable already for depth $d = 3$.
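To make the model concrete, here is a minimal Python sketch (names are ours) of a network built from arity-$k$ comparators: each comparator sorts a chosen block of at most $k$ positions at unit cost, a layer applies comparators on disjoint positions in parallel, and the depth is the number of layers. A single comparator of arity $k = n$ sorts everything in depth 1, matching the $d = 1, 2$ observation above.

```python
def apply_comparator(values, positions):
    """One arity-k comparator: sort the entries at the given positions, in place."""
    block = sorted(values[p] for p in positions)
    for p, v in zip(sorted(positions), block):
        values[p] = v

def run_network(values, layers):
    """Apply layers of comparators acting on disjoint positions; depth = len(layers)."""
    values = list(values)
    for layer in layers:
        for positions in layer:
            apply_comparator(values, positions)
    return values

# Depth 1 with a single arity-n comparator trivially sorts (here n = k = 4).
print(run_network([3, 1, 2, 0], [[(0, 1, 2, 3)]]))                      # [0, 1, 2, 3]
# Depth 2 with arity-2 comparators: not enough to sort every input of length 4.
print(run_network([3, 1, 2, 0], [[(0, 1), (2, 3)], [(0, 2), (1, 3)]]))  # [0, 2, 1, 3]
```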
{"title":"Constant-Depth Sorting Networks","authors":"Natalia Dobrokhotova-Maikova, A. Kozachinskiy, V. Podolskii","doi":"10.48550/arXiv.2208.08394","DOIUrl":"https://doi.org/10.48550/arXiv.2208.08394","url":null,"abstract":"In this paper, we address sorting networks that are constructed from comparators of arity $k>2$. That is, in our setting the arity of the comparators -- or, in other words, the number of inputs that can be sorted at the unit cost -- is a parameter. We study its relationship with two other parameters -- $n$, the number of inputs, and $d$, the depth. This model received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in. Motivated by these questions, we obtain the first lower bounds on the arity of constant-depth sorting networks. More precisely, we consider sorting networks of depth $d$ up to 4, and determine the minimal $k$ for which there is such a network with comparators of arity $k$. For depths $d=1,2$ we observe that $k=n$. For $d=3$ we show that $k = lceil frac n2 rceil$. For $d=4$ the minimal arity becomes sublinear: $k = Theta(n^{2/3})$. This contrasts with the case of majority circuits, in which $k = O(n^{2/3})$ is achievable already for depth $d=3$.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85782977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Direct Sum Theorems From Fortification
Pub Date: 2022-08-16. DOI: 10.48550/arXiv.2208.07730
Hao Wu
We revisit the direct sum question in communication complexity, which asks whether the resources needed to solve $n$ communication problems together are (approximately) the sum of the resources needed to solve these problems separately. Our work starts with the observation that Dinur and Meir's fortification lemma can be generalized to a fortification lemma for any sub-additive measure over sets. By applying this lemma to the cover number, we obtain a dual form of the cover number, called a "$\delta$-fooling set", which is a generalized fooling set. Any rectangle that contains sufficiently many elements from a $\delta$-fooling set cannot be monochromatic. With this fact, we are able to reprove the classic direct sum theorem for cover number with a simple double counting argument. Formally, let $S \subseteq (A\times B) \times O$ and $T \subseteq (P\times Q) \times Z$ be two communication problems; then $\log \mathsf{Cov}\left(S\times T\right) \geq \log \mathsf{Cov}\left(S\right) + \log\mathsf{Cov}(T) - \log\log|P||Q| - 4$, where $\mathsf{Cov}$ denotes the cover number. One issue with current deterministic direct sum theorems in communication complexity is that they provide no information when $n$ is small, especially when $n = 2$. In this work, we prove a new direct sum theorem about protocol size which implies a better direct sum theorem for two functions in terms of protocol size. Formally, let $\mathsf{L}$ denote the protocol size complexity of a communication problem; given a communication problem $F : A \times B \rightarrow \{0,1\}$, $\log\mathsf{L}\left(F\times F\right) \geq \log \mathsf{L}\left(F\right) + \Omega\left(\sqrt{\log\mathsf{L}\left(F\right)}\right) - \log\log|A||B| - 4$. All our results are obtained in a similar way, using the $\delta$-fooling set to construct a hardcore for the direct sum problem.
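As a toy illustration of the mechanism behind fooling sets (the classical special case of the $\delta$-fooling sets above), the sketch below, which is our own example and not taken from the paper, uses the equality function: any combinatorial rectangle containing two distinct diagonal inputs $(x,x)$ and $(y,y)$ also contains $(x,y)$, so it cannot be monochromatic.

```python
from itertools import product

n = 2
inputs = ["".join(bits) for bits in product("01", repeat=n)]
EQ = {(x, y): int(x == y) for x in inputs for y in inputs}

def monochromatic(rows, cols):
    """Is the combinatorial rectangle rows x cols constant under EQ?"""
    vals = {EQ[(x, y)] for x in rows for y in cols}
    return len(vals) == 1

# Classical fooling set for equality: the diagonal {(x, x)}.
# Any rectangle containing (x, x) and (y, y) with x != y also contains the
# off-diagonal entry (x, y), so it mixes 1s and 0s.
for x, y in product(inputs, repeat=2):
    if x != y:
        assert not monochromatic({x, y}, {x, y})
print("no rectangle covers two diagonal pairs monochromatically")
```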
{"title":"Direct Sum Theorems From Fortification","authors":"Hao Wu","doi":"10.48550/arXiv.2208.07730","DOIUrl":"https://doi.org/10.48550/arXiv.2208.07730","url":null,"abstract":"We revisit the direct sum questions in communication complexity which asks whether the resource needed to solve $n$ communication problems together is (approximately) the sum of resources needed to solve these problems separately. Our work starts with the observation that Dinur and Meir's fortification lemma can be generalized to a general fortification lemma for a sub-additive measure over set. By applying this lemma to the case of cover number, we obtain a dual form of cover number, called\"$delta$-fooling set\"which is a generalized fooling set. Any rectangle which contains enough number of elements from a $delta$-fooling set can not be monochromatic. With this fact, we are able to reprove the classic direct sum theorem of cover number with a simple double counting argument. Formally, let $S subseteq (Atimes B) times O$ and $T subseteq (Ptimes Q) times Z$ be two communication problems, $ log mathsf{Cov}left(Stimes Tright) geq log mathsf{Cov}left(Sright) + logmathsf{Cov}(T) -loglog|P||Q|-4.$ where $mathsf{Cov}$ denotes the cover number. One issue of current deterministic direct sum theorems about communication complexity is that they provide no information when $n$ is small, especially when $n=2$. In this work, we prove a new direct sum theorem about protocol size which imply a better direct sum theorem for two functions in terms of protocol size. Formally, let $mathsf{L}$ denotes complexity of the protocol size of a communication problem, given a communication problem $F:A times B rightarrow {0,1}$, $ logmathsf{L}left(Ftimes Fright)geq log mathsf{L}left(Fright) +Omegaleft(sqrt{logmathsf{L}left(Fright)}right)-loglog|A||B| -4$. All our results are obtained in a similar way using the $delta$-fooling set to construct a hardcore for the direct sum problem.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"82 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83396482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Communication Complexity of Collision
Pub Date: 2022-07-29. DOI: 10.48550/arXiv.2208.00029. Pages: 19:1-19:9
Mika Göös, Siddhartha Jain
The Collision problem is to decide whether a given list of numbers $(x_1, \ldots, x_n) \in [n]^n$ is 1-to-1 or 2-to-1, when promised that one of these is the case. We show an $n^{\Omega(1)}$ randomised communication lower bound for the natural two-party version of Collision, where Alice holds the first half of the bits of each $x_i$ and Bob holds the second half. As an application, we also show a similar lower bound for a weak bit-pigeonhole search problem, which answers a question of Itsykson and Riazanov (CCC 2021).
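The promise in the Collision problem is simple to state in code; here is a small sketch (the helper name is ours) that classifies a list under that promise.

```python
from collections import Counter

def collision_type(xs):
    """Classify a list promised to be either 1-to-1 or 2-to-1."""
    counts = Counter(xs).values()
    if all(c == 1 for c in counts):
        return "1-to-1"
    if all(c == 2 for c in counts):
        return "2-to-1"
    return "promise violated"

print(collision_type([3, 1, 4, 2]))  # 1-to-1
print(collision_type([2, 2, 4, 4]))  # 2-to-1
```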
{"title":"Communication Complexity of Collision","authors":"Mika Göös, Siddhartha Jain","doi":"10.48550/arXiv.2208.00029","DOIUrl":"https://doi.org/10.48550/arXiv.2208.00029","url":null,"abstract":"The Collision problem is to decide whether a given list of numbers ( x 1 , . . . , x n ) ∈ [ n ] n is 1-to-1 or 2-to-1 when promised one of them is the case. We show an n Ω(1) randomised communication lower bound for the natural two-party version of Collision where Alice holds the first half of the bits of each x i and Bob holds the second half. As an application, we also show a similar lower bound for a weak bit-pigeonhole search problem, which answers a question of Itsykson and Riazanov ( CCC 2021 ). 2012 ACM Subject Classification Theory of Communication","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"90 1","pages":"19:1-19:9"},"PeriodicalIF":0.0,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77117007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing of Index-Invariant Properties in the Huge Object Model
Pub Date: 2022-07-25. DOI: 10.48550/arXiv.2207.12514. Pages: 3065-3136
Sourav Chakraborty, E. Fischer, Arijit Ghosh, Gopinath Mishra, Sayantan Sen
The study of distribution testing has become ubiquitous in the area of property testing, both for its theoretical appeal and for its applications in other fields of Computer Science. The original distribution testing model relies on samples drawn independently from the distribution to be tested. However, when testing distributions over the $n$-dimensional Hamming cube $\left\{0,1\right\}^{n}$ for a large $n$, even reading a few samples is infeasible. To address this, Goldreich and Ron [ITCS 2022] have defined a model called the huge object model, in which the samples may only be queried in a few places. In this work, we initiate a study of a general class of properties in the huge object model, those that are invariant under a permutation of the indices of the vectors in $\left\{0,1\right\}^{n}$, while still not being necessarily fully symmetric as per the definition used in traditional distribution testing. We prove that every index-invariant property satisfying a bounded VC-dimension restriction admits a property tester with a number of queries independent of $n$. To complement this result, we argue that satisfying only index-invariance or only a VC-dimension bound is insufficient to guarantee a tester whose query complexity is independent of $n$. Moreover, we prove that the dependency of the sample and query complexities of our tester on the VC-dimension is tight. As a second part of this work, we address the question of the number of queries required for non-adaptive testing. We show that it can be at most quadratic in the number of queries required for an adaptive tester of index-invariant properties. This is in contrast with the tight exponential gap for general non-index-invariant properties. Finally, we provide an index-invariant property for which the quadratic gap between adaptive and non-adaptive query complexities for testing is almost tight.
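A small sketch of what index-invariance means for the sampled data (names are ours): relabelling the $n$ coordinates by a permutation, applied consistently to every sampled vector, must not change whether the underlying distribution has the property.

```python
import random

def permute_indices(samples, perm):
    """Relabel the n coordinates: new coordinate i takes the value of old coordinate perm[i]."""
    return [tuple(vec[perm[i]] for i in range(len(perm))) for vec in samples]

def weights(samples):
    """Multiset of Hamming weights -- an example of an index-invariant statistic."""
    return sorted(sum(vec) for vec in samples)

samples = [(0, 1, 1, 0), (1, 1, 0, 0), (0, 0, 0, 1)]
perm = list(range(4))
random.shuffle(perm)

# An index-invariant property cannot distinguish the original samples from the
# relabelled ones; e.g. the Hamming-weight profile is unchanged.
assert weights(samples) == weights(permute_indices(samples, perm))
print(permute_indices(samples, perm))
```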
{"title":"Testing of Index-Invariant Properties in the Huge Object Model","authors":"Sourav Chakraborty, E. Fischer, Arijit Ghosh, Gopinath Mishra, Sayantan Sen","doi":"10.48550/arXiv.2207.12514","DOIUrl":"https://doi.org/10.48550/arXiv.2207.12514","url":null,"abstract":"The study of distribution testing has become ubiquitous in the area of property testing, both for its theoretical appeal, as well as for its applications in other fields of Computer Science. The original distribution testing model relies on samples drawn independently from the distribution to be tested. However, when testing distributions over the $n$-dimensional Hamming cube $left{0,1right}^{n}$ for a large $n$, even reading a few samples is infeasible. To address this, Goldreich and Ron [ITCS 2022] have defined a model called the huge object model, in which the samples may only be queried in a few places. In this work, we initiate a study of a general class of properties in the huge object model, those that are invariant under a permutation of the indices of the vectors in $left{0,1right}^{n}$, while still not being necessarily fully symmetric as per the definition used in traditional distribution testing. We prove that every index-invariant property satisfying a bounded VC-dimension restriction admits a property tester with a number of queries independent of n. To complement this result, we argue that satisfying only index-invariance or only a VC-dimension bound is insufficient to guarantee a tester whose query complexity is independent of n. Moreover, we prove that the dependency of sample and query complexities of our tester on the VC-dimension is tight. As a second part of this work, we address the question of the number of queries required for non-adaptive testing. We show that it can be at most quadratic in the number of queries required for an adaptive tester of index-invariant properties. This is in contrast with the tight exponential gap for general non-index-invariant properties. Finally, we provide an index-invariant property for which the quadratic gap between adaptive and non-adaptive query complexities for testing is almost tight.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"6 1","pages":"3065-3136"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78728189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Hardness of Testing Equivalence to Sparse Polynomials Under Shifts
Pub Date: 2022-07-21. DOI: 10.48550/arXiv.2207.10588. Pages: 22:1-22:20
S. Chillara, Coral Grichener, Amir Shpilka
We say that two given polynomials $f, g \in R[X]$, over a ring $R$, are equivalent under shifts if there exists a vector $a \in R^n$ such that $f(X+a) = g(X)$. Grigoriev and Karpinski (FOCS 1990), Lakshman and Saunders (SICOMP, 1995), and Grigoriev and Lakshman (ISSAC 1995) studied the problem of testing polynomial equivalence of a given polynomial to any $t$-sparse polynomial, over the rational numbers, and gave exponential time algorithms. In this paper, we provide hardness results for this problem. Formally, for a ring $R$, let $\mathrm{SparseShift}_R$ be the following decision problem: given a polynomial $P(X)$, is there a vector $a$ such that $P(X+a)$ contains fewer monomials than $P(X)$? We show that $\mathrm{SparseShift}_R$ is at least as hard as checking if a given system of polynomial equations over $R[x_1,\ldots, x_n]$ has a solution (Hilbert's Nullstellensatz). As a consequence of this reduction, we get the following results. 1. $\mathrm{SparseShift}_\mathbb{Z}$ is undecidable. 2. For any ring $R$ (which is not a field) such that $\mathrm{HN}_R$ is $\mathrm{NP}_R$-complete over the Blum-Shub-Smale model of computation, $\mathrm{SparseShift}_{R}$ is also $\mathrm{NP}_{R}$-complete. In particular, $\mathrm{SparseShift}_{\mathbb{Z}}$ is also $\mathrm{NP}_{\mathbb{Z}}$-complete. We also study the gap version of $\mathrm{SparseShift}_R$ and show the following. 1. For every function $\beta: \mathbb{N}\to\mathbb{R}_+$ such that $\beta \in o(1)$, $N^\beta$-gap-$\mathrm{SparseShift}_\mathbb{Z}$ is also undecidable (where $N$ is the input length). 2. For $R=\mathbb{F}_p, \mathbb{Q}, \mathbb{R}$ or $\mathbb{Z}_q$ and for every $\beta>1$, the $\beta$-gap-$\mathrm{SparseShift}_R$ problem is $\mathrm{NP}$-hard.
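For tiny instances the decision problem can be explored by brute force. The sketch below (our own illustration, using sympy and searching only small integer shifts) expands $P(X+a)$ and counts monomials; for instance, $(x+1)^3$ has four monomials, while the shift $a = -1$ leaves the single monomial $x^3$.

```python
from itertools import product
from sympy import symbols, expand, Poly

x, y = symbols("x y")

def num_monomials(p, gens):
    """Number of monomials in the expanded form of p."""
    return len(Poly(expand(p), *gens).terms())

def has_sparsifying_shift(p, gens, shift_range=range(-2, 3)):
    """Brute-force search for an integer shift a with fewer monomials in P(X+a)."""
    base = num_monomials(p, gens)
    for a in product(shift_range, repeat=len(gens)):
        shifted = p.subs({g: g + ai for g, ai in zip(gens, a)}, simultaneous=True)
        if num_monomials(shifted, gens) < base:
            return a
    return None

P = (x + 1) ** 3 + (y - 2) ** 2
print(has_sparsifying_shift(P, (x, y)))  # prints the first shift found that strictly reduces the monomial count
```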
{"title":"On Hardness of Testing Equivalence to Sparse Polynomials Under Shifts","authors":"S. Chillara, Coral Grichener, Amir Shpilka","doi":"10.48550/arXiv.2207.10588","DOIUrl":"https://doi.org/10.48550/arXiv.2207.10588","url":null,"abstract":"We say that two given polynomials $f, g in R[X]$, over a ring $R$, are equivalent under shifts if there exists a vector $a in R^n$ such that $f(X+a) = g(X)$. Grigoriev and Karpinski (FOCS 1990), Lakshman and Saunders (SICOMP, 1995), and Grigoriev and Lakshman (ISSAC 1995) studied the problem of testing polynomial equivalence of a given polynomial to any $t$-sparse polynomial, over the rational numbers, and gave exponential time algorithms. In this paper, we provide hardness results for this problem. Formally, for a ring $R$, let $mathrm{SparseShift}_R$ be the following decision problem. Given a polynomial $P(X)$, is there a vector $a$ such that $P(X+a)$ contains fewer monomials than $P(X)$. We show that $mathrm{SparseShift}_R$ is at least as hard as checking if a given system of polynomial equations over $R[x_1,ldots, x_n]$ has a solution (Hilbert's Nullstellensatz). As a consequence of this reduction, we get the following results. 1. $mathrm{SparseShift}_mathbb{Z}$ is undecidable. 2. For any ring $R$ (which is not a field) such that $mathrm{HN}_R$ is $mathrm{NP}_R$-complete over the Blum-Shub-Smale model of computation, $mathrm{SparseShift}_{R}$ is also $mathrm{NP}_{R}$-complete. In particular, $mathrm{SparseShift}_{mathbb{Z}}$ is also $mathrm{NP}_{mathbb{Z}}$-complete. We also study the gap version of the $mathrm{SparseShift}_R$ and show the following. 1. For every function $beta: mathbb{N}tomathbb{R}_+$ such that $betain o(1)$, $N^beta$-gap-$mathrm{SparseShift}_mathbb{Z}$ is also undecidable (where $N$ is the input length). 2. For $R=mathbb{F}_p, mathbb{Q}, mathbb{R}$ or $mathbb{Z}_q$ and for every $beta>1$ the $beta$-gap-$mathrm{SparseShift}_R$ problem is $mathrm{NP}$-hard.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"77 1","pages":"22:1-22:20"},"PeriodicalIF":0.0,"publicationDate":"2022-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80380528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Streaming complexity of CSPs with randomly ordered constraints
Pub Date: 2022-07-14. DOI: 10.1137/1.9781611977554.ch156. Pages: 4083-4103
Raghuvansh R. Saxena, Noah G. Singer, M. Sudan, Santhoshini Velusamy
We initiate a study of the streaming complexity of constraint satisfaction problems (CSPs) when the constraints arrive in a random order. We show that there exists a CSP, namely $\textsf{Max-DICUT}$, for which random ordering makes a provable difference. Whereas a $4/9 \approx 0.445$ approximation of $\textsf{DICUT}$ requires $\Omega(\sqrt{n})$ space with adversarial ordering, we show that with random ordering of constraints there exists a $0.48$-approximation algorithm that only needs $O(\log n)$ space. We also give new algorithms for $\textsf{Max-DICUT}$ in variants of the adversarial ordering setting. Specifically, we give a two-pass $O(\log n)$ space $0.48$-approximation algorithm for general graphs and a single-pass $\tilde{O}(\sqrt{n})$ space $0.48$-approximation algorithm for bounded degree graphs. On the negative side, we prove that CSPs where the satisfying assignments of the constraints support a one-wise independent distribution require $\Omega(\sqrt{n})$ space for any non-trivial approximation, even when the constraints are randomly ordered. This was previously known only for adversarially ordered constraints. Extending the results to randomly ordered constraints requires switching the hard instances from a union of random matchings to simple Erdős-Rényi random (hyper)graphs and extending tools that can perform Fourier analysis on such instances. The only CSP to have been considered previously with random ordering is $\textsf{Max-CUT}$, where the ordering is not known to change the approximability. Specifically, it is known to be as hard to approximate with random ordering as with adversarial ordering, for $o(\sqrt{n})$ space algorithms. Our results show a richer variety of possibilities and motivate further study of CSPs with randomly ordered constraints.
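To fix what is being approximated: in $\textsf{Max-DICUT}$ one chooses a set $S$ of vertices and counts the directed edges going from $S$ to its complement. A brute-force sketch on a toy digraph (our own example) computes the exact optimum that the streaming algorithms above approximate.

```python
from itertools import combinations

def max_dicut(n, edges):
    """Exact Max-DICUT by brute force: maximise #edges (u, v) with u in S and v not in S."""
    best = 0
    vertices = range(n)
    for r in range(n + 1):
        for subset in combinations(vertices, r):
            S = set(subset)
            cut = sum(1 for u, v in edges if u in S and v not in S)
            best = max(best, cut)
    return best

edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 1)]
print(max_dicut(4, edges))  # optimum directed cut value of this 4-vertex digraph
```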
{"title":"Streaming complexity of CSPs with randomly ordered constraints","authors":"Raghuvansh R. Saxena, Noah G. Singer, M. Sudan, Santhoshini Velusamy","doi":"10.1137/1.9781611977554.ch156","DOIUrl":"https://doi.org/10.1137/1.9781611977554.ch156","url":null,"abstract":"We initiate a study of the streaming complexity of constraint satisfaction problems (CSPs) when the constraints arrive in a random order. We show that there exists a CSP, namely $textsf{Max-DICUT}$, for which random ordering makes a provable difference. Whereas a $4/9 approx 0.445$ approximation of $textsf{DICUT}$ requires $Omega(sqrt{n})$ space with adversarial ordering, we show that with random ordering of constraints there exists a $0.48$-approximation algorithm that only needs $O(log n)$ space. We also give new algorithms for $textsf{Max-DICUT}$ in variants of the adversarial ordering setting. Specifically, we give a two-pass $O(log n)$ space $0.48$-approximation algorithm for general graphs and a single-pass $tilde{O}(sqrt{n})$ space $0.48$-approximation algorithm for bounded degree graphs. On the negative side, we prove that CSPs where the satisfying assignments of the constraints support a one-wise independent distribution require $Omega(sqrt{n})$-space for any non-trivial approximation, even when the constraints are randomly ordered. This was previously known only for adversarially ordered constraints. Extending the results to randomly ordered constraints requires switching the hard instances from a union of random matchings to simple Erd\"os-Renyi random (hyper)graphs and extending tools that can perform Fourier analysis on such instances. The only CSP to have been considered previously with random ordering is $textsf{Max-CUT}$ where the ordering is not known to change the approximability. Specifically it is known to be as hard to approximate with random ordering as with adversarial ordering, for $o(sqrt{n})$ space algorithms. Our results show a richer variety of possibilities and motivate further study of CSPs with randomly ordered constraints.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"51 1","pages":"4083-4103"},"PeriodicalIF":0.0,"publicationDate":"2022-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77239653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QCDCL with Cube Learning or Pure Literal Elimination - What is best?
Pub Date: 2022-07-01. DOI: 10.24963/ijcai.2022/248. Pages: 1781-1787
Olaf Beyersdorff, Benjamin Böhm
Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways to practically implement QCDCL.
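As background for one ingredient compared above, here is a minimal sketch of pure-literal elimination in the propositional setting (our own code; the QBF rule additionally treats existential and universal pure literals differently): a literal occurring in only one polarity can be satisfied outright, removing every clause that contains it.

```python
def pure_literal_eliminate(clauses):
    """Repeatedly satisfy (and delete) clauses containing a pure literal.
    Clauses are sets of nonzero ints; -v denotes the negation of variable v."""
    clauses = [set(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        literals = {lit for c in clauses for lit in c}
        pure = {lit for lit in literals if -lit not in literals}
        if pure:
            remaining = [c for c in clauses if not (c & pure)]
            changed = len(remaining) != len(clauses)
            clauses = remaining
    return clauses

# x1 occurs only positively, so the clause {x1, x2} is satisfied and removed.
print(pure_literal_eliminate([{1, 2}, {-2, 3}, {2, -3}]))
```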
{"title":"QCDCL with Cube Learning or Pure Literal Elimination - What is best?","authors":"Olaf Beyersdorff, Benjamin Böhm","doi":"10.24963/ijcai.2022/248","DOIUrl":"https://doi.org/10.24963/ijcai.2022/248","url":null,"abstract":"Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"135 1","pages":"1781-1787"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84736541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved bounds on the AN-complexity of O(1)-linear functions
Pub Date: 2022-06-23. DOI: 10.1007/s00037-022-00224-7. Pages: 7
O. Goldreich
{"title":"Improved bounds on the AN-complexity of O(1)-linear functions","authors":"O. Goldreich","doi":"10.1007/s00037-022-00224-7","DOIUrl":"https://doi.org/10.1007/s00037-022-00224-7","url":null,"abstract":"","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"1 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89511662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}