We prove that the class LOGSPACE (L, for short) is different from the class NP.
{"title":"L is different from NP","authors":"J. Andres Montoya","doi":"arxiv-2404.16562","DOIUrl":"https://doi.org/arxiv-2404.16562","url":null,"abstract":"We prove that the class LOGSPACE (L, for short) is different from the class\u0000NP.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"98 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140801468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MIT Hardness Group, Della Hendrickson, Andy Tockman
We study three problems related to the computational complexity of the popular game Minesweeper. The first is consistency: given a set of clues, is there any arrangement of mines that satisfies it? This problem has been known to be NP-complete since 2000, but our framework proves it as a side effect. The second is inference: given a set of clues, is there any cell that the player can prove is safe? The coNP-completeness of this problem has been in the literature since 2011, but we discovered a flaw that we believe is present in all published results, and we provide a fixed proof. Finally, the third is solvability: given the full state of a Minesweeper game, can the player win the game by safely clicking all non-mine cells? This problem has not yet been studied, and we prove that it is coNP-complete.
{"title":"Complexity of Planar Graph Orientation Consistency, Promise-Inference, and Uniqueness, with Applications to Minesweeper Variants","authors":"MIT Hardness Group, Della Hendrickson, Andy Tockman","doi":"arxiv-2404.14519","DOIUrl":"https://doi.org/arxiv-2404.14519","url":null,"abstract":"We study three problems related to the computational complexity of the\u0000popular game Minesweeper. The first is consistency: given a set of clues, is\u0000there any arrangement of mines that satisfies it? This problem has been known\u0000to be NP-complete since 2000, but our framework proves it as a side effect. The\u0000second is inference: given a set of clues, is there any cell that the player\u0000can prove is safe? The coNP-completeness of this problem has been in the\u0000literature since 2011, but we discovered a flaw that we believe is present in\u0000all published results, and we provide a fixed proof. Finally, the third is\u0000solvability: given the full state of a Minesweeper game, can the player win the\u0000game by safely clicking all non-mine cells? This problem has not yet been\u0000studied, and we prove that it is coNP-complete.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140801381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MIT Hardness Group, Erik D. Demaine, Holden Hall, Jeffery Li
We prove NP-hardness and #P-hardness of Tetris clearing (clearing an initial board using a given sequence of pieces) with the Super Rotation System (SRS), even when the pieces are limited to any two of the seven Tetris piece types. This result is the first advance on a question posed twenty years ago: which piece sets are easy vs. hard? All previous Tetris NP-hardness proofs used five of the seven piece types. We also prove ASP-completeness of Tetris clearing, using three piece types, as well as versions of 3-Partition and Numerical 3-Dimensional Matching where all input integers are distinct. Finally, we prove NP-hardness of Tetris survival and clearing under the "hard drops only" and "20G" modes, using two piece types, improving on a previous "hard drops only" result that used five piece types.
{"title":"Tetris with Few Piece Types","authors":"MIT Hardness Group, Erik D. Demaine, Holden Hall, Jeffery Li","doi":"arxiv-2404.10712","DOIUrl":"https://doi.org/arxiv-2404.10712","url":null,"abstract":"We prove NP-hardness and #P-hardness of Tetris clearing (clearing an initial\u0000board using a given sequence of pieces) with the Super Rotation System (SRS),\u0000even when the pieces are limited to any two of the seven Tetris piece types.\u0000This result is the first advance on a question posed twenty years ago: which\u0000piece sets are easy vs. hard? All previous Tetris NP-hardness proofs used five\u0000of the seven piece types. We also prove ASP-completeness of Tetris clearing,\u0000using three piece types, as well as versions of 3-Partition and Numerical\u00003-Dimensional Matching where all input integers are distinct. Finally, we prove\u0000NP-hardness of Tetris survival and clearing under the \"hard drops only\" and\u0000\"20G\" modes, using two piece types, improving on a previous \"hard drops only\"\u0000result that used five piece types.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140613007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MIT Hardness Group, Hayashi Ani, Erik D. Demaine, Holden Hall, Matias Korman
We prove PSPACE-hardness for fifteen games in the Super Mario Bros. 2D platforming video game series. Previously, only the original Super Mario Bros. was known to be PSPACE-hard (FUN 2016), though several of the games we study were known to be NP-hard (FUN 2014). Our reductions build door gadgets with open, close, and traverse traversals, in each case using mechanics unique to the game. While some of our door constructions are similar to those from FUN 2016, those for Super Mario Bros. 2, Super Mario Land 2, Super Mario World 2, and the New Super Mario Bros. series are quite different; notably, the Super Mario Bros. 2 door is extremely difficult. Doors remain elusive for just two 2D Mario games (Super Mario Land and Super Mario Run); we prove that these games are at least NP-hard.
{"title":"PSPACE-Hard 2D Super Mario Games: Thirteen Doors","authors":"MIT Hardness Group, Hayashi Ani, Erik D. Demaine, Holden Hall, Matias Korman","doi":"arxiv-2404.10380","DOIUrl":"https://doi.org/arxiv-2404.10380","url":null,"abstract":"We prove PSPACE-hardness for fifteen games in the Super Mario Bros. 2D\u0000platforming video game series. Previously, only the original Super Mario Bros.\u0000was known to be PSPACE-hard (FUN 2016), though several of the games we study\u0000were known to be NP-hard (FUN 2014). Our reductions build door gadgets with\u0000open, close, and traverse traversals, in each case using mechanics unique to\u0000the game. While some of our door constructions are similar to those from FUN\u00002016, those for Super Mario Bros. 2, Super Mario Land 2, Super Mario World 2,\u0000and the New Super Mario Bros. series are quite different; notably, the Super\u0000Mario Bros. 2 door is extremely difficult. Doors remain elusive for just two 2D\u0000Mario games (Super Mario Land and Super Mario Run); we prove that these games\u0000are at least NP-hard.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"196 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140612667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carlos V. G. C. Lima, Thiago Marcilon, Pedro Paulo de Medeiros
The subject of graph convexity is well explored in the literature, above all the so-called interval convexities. In this work, we explore cycle convexity, whose interval function is $I(S) = S \cup \{u \mid G[S \cup \{u\}] \text{ has a cycle containing } u\}$. In this convexity, we prove that the decision problems associated with the parameters rank and convexity number are NP-complete and W[1]-hard when parameterized by the solution size. We also prove that determining whether the percolation time of a graph is at least $k$ is NP-complete, but polynomial for cacti or when $k \leq 2$.
{"title":"On the complexity of some cycle convexity parameters","authors":"Carlos V. G. C. Lima, Thiago Marcilon, Pedro Paulo de Medeiros","doi":"arxiv-2404.09236","DOIUrl":"https://doi.org/arxiv-2404.09236","url":null,"abstract":"The subject of graph convexity is well explored in the literature, the\u0000so-called interval convexities above all. In this work, we explore the cycle\u0000convexity, whose interval function is $I(S) = S cup {u mid G[S cup {u}]$\u0000has a cycle containing $u}$. In this convexity, we prove that the decision\u0000problems associated to the parameters rank and convexity number are in\u0000NP-complete and W[1]-hard when parameterized by the solution size. We also\u0000prove that to determine whether the percolation time of a graph is at least $k$\u0000is NP-complete, but polynomial for cacti or when $kleq2$","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Parameterized Inapproximability Hypothesis (PIH), which is an analog of the PCP theorem in parameterized complexity, asserts that there is a constant $\varepsilon > 0$ such that for any computable function $f:\mathbb{N}\to\mathbb{N}$, no $f(k)\cdot n^{O(1)}$-time algorithm can, on input a $k$-variable CSP instance with domain size $n$, find an assignment satisfying a $1-\varepsilon$ fraction of the constraints. A recent work by Guruswami, Lin, Ren, Sun, and Wu (STOC'24) established PIH under the Exponential Time Hypothesis (ETH). In this work, we improve the quantitative aspects of PIH and prove (under ETH) that approximating sparse parameterized CSPs within a constant factor requires $n^{k^{1-o(1)}}$ time. This immediately implies that, assuming ETH, finding a $(k/2)$-clique in an $n$-vertex graph with a $k$-clique requires $n^{k^{1-o(1)}}$ time. We also prove almost optimal time lower bounds for approximating $k$-ExactCover and Max $k$-Coverage. Our proof follows the blueprint of the previous work to identify a "vector-structured" ETH-hard CSP whose satisfiability can be checked via an appropriate form of "parallel" PCP. Using further ideas in the reduction, we guarantee additional structures for the constraints in the CSP. We then leverage this to design a parallel PCP of almost linear size based on Reed-Muller codes and derandomized low-degree testing.
{"title":"Almost Optimal Time Lower Bound for Approximating Parameterized Clique, CSP, and More, under ETH","authors":"Venkatesan Guruswami, Bingkai Lin, Xuandi Ren, Yican Sun, Kewen Wu","doi":"arxiv-2404.08870","DOIUrl":"https://doi.org/arxiv-2404.08870","url":null,"abstract":"The Parameterized Inapproximability Hypothesis (PIH), which is an analog of\u0000the PCP theorem in parameterized complexity, asserts that, there is a constant\u0000$varepsilon> 0$ such that for any computable function\u0000$f:mathbb{N}tomathbb{N}$, no $f(k)cdot n^{O(1)}$-time algorithm can, on\u0000input a $k$-variable CSP instance with domain size $n$, find an assignment\u0000satisfying $1-varepsilon$ fraction of the constraints. A recent work by\u0000Guruswami, Lin, Ren, Sun, and Wu (STOC'24) established PIH under the\u0000Exponential Time Hypothesis (ETH). In this work, we improve the quantitative aspects of PIH and prove (under\u0000ETH) that approximating sparse parameterized CSPs within a constant factor\u0000requires $n^{k^{1-o(1)}}$ time. This immediately implies that, assuming ETH,\u0000finding a $(k/2)$-clique in an $n$-vertex graph with a $k$-clique requires\u0000$n^{k^{1-o(1)}}$ time. We also prove almost optimal time lower bounds for\u0000approximating $k$-ExactCover and Max $k$-Coverage. Our proof follows the blueprint of the previous work to identify a\u0000\"vector-structured\" ETH-hard CSP whose satisfiability can be checked via an\u0000appropriate form of \"parallel\" PCP. Using further ideas in the reduction, we\u0000guarantee additional structures for constraints in the CSP. We then leverage\u0000this to design a parallel PCP of almost linear size based on Reed-Muller codes\u0000and derandomized low degree testing.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"76 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140602041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Svyatoslav Gryaznov, Sergei Ovcharov, Artur Riazanov
We consider the proof system Res($\oplus$) introduced by Itsykson and Sokolov (Ann. Pure Appl. Log.'20), which is an extension of the resolution proof system and operates with disjunctions of linear equations over $\mathbb{F}_2$. We study characterizations of tree-like size and space of Res($\oplus$) refutations using combinatorial games. Namely, we introduce a class of extensible formulas and prove tree-like size lower bounds on it using Prover-Delayer games, as well as space lower bounds. This class is of particular interest since it contains many classical combinatorial principles, including the pigeonhole, ordering, and dense linear ordering principles. Furthermore, we present the width-space relation for Res($\oplus$), generalizing the results by Atserias and Dalmau (J. Comput. Syst. Sci.'08) and their variant of Spoiler-Duplicator games.
{"title":"Resolution Over Linear Equations: Combinatorial Games for Tree-like Size and Space","authors":"Svyatoslav Gryaznov, Sergei Ovcharov, Artur Riazanov","doi":"arxiv-2404.08370","DOIUrl":"https://doi.org/arxiv-2404.08370","url":null,"abstract":"We consider the proof system Res($oplus$) introduced by Itsykson and Sokolov\u0000(Ann. Pure Appl. Log.'20), which is an extension of the resolution proof system\u0000and operates with disjunctions of linear equations over $mathbb{F}_2$. We study characterizations of tree-like size and space of Res($oplus$)\u0000refutations using combinatorial games. Namely, we introduce a class of\u0000extensible formulas and prove tree-like size lower bounds on it using\u0000Prover-Delayer games, as well as space lower bounds. This class is of\u0000particular interest since it contains many classical combinatorial principles,\u0000including the pigeonhole, ordering, and dense linear ordering principles. Furthermore, we present the width-space relation for Res($oplus$)\u0000generalizing the results by Atserias and Dalmau (J. Comput. Syst. Sci.'08) and\u0000their variant of Spoiler-Duplicator games.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lifting theorems are theorems that bound the communication complexity of a composed function $f \circ g^{n}$ in terms of the query complexity of $f$ and the communication complexity of $g$. Such theorems constitute a powerful generalization of direct-sum theorems for $g$, and have seen numerous applications in recent years. We prove a new lifting theorem that works for every two functions $f,g$ such that the discrepancy of $g$ is at most inverse polynomial in the input length of $f$. Our result is a significant generalization of the known direct-sum theorem for discrepancy, and extends the range of inner functions $g$ for which lifting theorems hold.
{"title":"Lifting with Inner Functions of Polynomial Discrepancy","authors":"Yahel Manor, Or Meir","doi":"arxiv-2404.07606","DOIUrl":"https://doi.org/arxiv-2404.07606","url":null,"abstract":"Lifting theorems are theorems that bound the communication complexity of a\u0000composed function $fcirc g^{n}$ in terms of the query complexity of $f$ and\u0000the communication complexity of $g$. Such theorems constitute a powerful\u0000generalization of direct-sum theorems for $g$, and have seen numerous\u0000applications in recent years. We prove a new lifting theorem that works for\u0000every two functions $f,g$ such that the discrepancy of $g$ is at most inverse\u0000polynomial in the input length of $f$. Our result is a significant\u0000generalization of the known direct-sum theorem for discrepancy, and extends the\u0000range of inner functions $g$ for which lifting theorems hold.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the CHAIN communication problem introduced by Cormode et al. [ICALP 2019]. It is a generalization of the well-studied INDEX problem. For $k \geq 1$, in CHAIN$_{n,k}$, there are $k$ instances of INDEX, all with the same answer. They are shared between $k+1$ players as follows. Player 1 has the first string $X^1 \in \{0,1\}^n$, player 2 has the first index $\sigma^1 \in [n]$ and the second string $X^2 \in \{0,1\}^n$, player 3 has the second index $\sigma^2 \in [n]$ along with the third string $X^3 \in \{0,1\}^n$, and so on. Player $k+1$ has the last index $\sigma^k \in [n]$. The communication is one way from each player to the next, starting from player 1 to player 2, then from player 2 to player 3, and so on. Player $k+1$, after receiving the message from player $k$, has to output a single bit which is the answer to all $k$ instances of INDEX. Cormode et al. proved that the CHAIN$_{n,k}$ problem requires $\Omega(n/k^2)$ communication, and they used it to prove streaming lower bounds for the approximation of maximum independent sets. Subsequently, it was used by Feldman et al. [STOC 2020] to prove lower bounds for streaming submodular maximization. However, these works do not obtain optimal bounds on the communication complexity of CHAIN$_{n,k}$, and in fact, Cormode et al. conjectured that $\Omega(n)$ bits are necessary for any $k$. As our main result, we prove the optimal lower bound of $\Omega(n)$ for CHAIN$_{n,k}$. This settles the open conjecture of Cormode et al. in the affirmative. The key technique is to use information-theoretic tools to analyze protocols over the Jensen-Shannon divergence measure, as opposed to total variation distance. As a corollary, we get an improved lower bound for approximation of maximum independent set in vertex arrival streams through a direct reduction from CHAIN.
{"title":"Optimal Communication Complexity of Chained Index","authors":"Janani Sundaresan","doi":"arxiv-2404.07026","DOIUrl":"https://doi.org/arxiv-2404.07026","url":null,"abstract":"We study the CHAIN communication problem introduced by Cormode et al. [ICALP\u00002019]. It is a generalization of the well-studied INDEX problem. For $kgeq 1$,\u0000in CHAIN$_{n,k}$, there are $k$ instances of INDEX, all with the same answer.\u0000They are shared between $k+1$ players as follows. Player 1 has the first string\u0000$X^1 in {0,1}^n$, player 2 has the first index $sigma^1 in [n]$ and the\u0000second string $X^2 in {0,1}^n$, player 3 has the second index $sigma^2 in\u0000[n]$ along with the third string $X^3 in {0,1}^n$, and so on. Player $k+1$\u0000has the last index $sigma^k in [n]$. The communication is one way from each\u0000player to the next, starting from player 1 to player 2, then from player 2 to\u0000player 3 and so on. Player $k+1$, after receiving the message from player $k$,\u0000has to output a single bit which is the answer to all $k$ instances of INDEX. It was proved that the CHAIN$_{n,k}$ problem requires $Omega(n/k^2)$\u0000communication by Cormode et al., and they used it to prove streaming lower\u0000bounds for approximation of maximum independent sets. Subsequently, it was used\u0000by Feldman et al. [STOC 2020] to prove lower bounds for streaming submodular\u0000maximization. However, these works do not get optimal bounds on the\u0000communication complexity of CHAIN$_{n,k}$, and in fact, it was conjectured by\u0000Cormode et al. that $Omega(n)$ bits are necessary, for any $k$. As our main result, we prove the optimal lower bound of $Omega(n)$ for\u0000CHAIN$_{n,k}$. This settles the open conjecture of Cormode et al. in the\u0000affirmative. The key technique is to use information theoretic tools to analyze\u0000protocols over the Jensen-Shannon divergence measure, as opposed to total\u0000variation distance. As a corollary, we get an improved lower bound for\u0000approximation of maximum independent set in vertex arrival streams through a\u0000reduction from CHAIN directly.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We give improved lower bounds for binary $3$-query locally correctable codes (3-LCCs) $C \colon \{0,1\}^k \rightarrow \{0,1\}^n$. Specifically, we prove: (1) If $C$ is a linear design 3-LCC, then $n \geq 2^{(1 - o(1))\sqrt{k}}$. A design 3-LCC has the additional property that the correcting sets for every codeword bit form a perfect matching and every pair of codeword bits is queried an equal number of times across all matchings. Our bound is tight up to a factor $\sqrt{8}$ in the exponent of $2$, as the best construction of binary $3$-LCCs (obtained by taking Reed-Muller codes on $\mathbb{F}_4$ and applying a natural projection map) is a design $3$-LCC with $n \leq 2^{\sqrt{8 k}}$. Up to a $\sqrt{8}$ factor, this resolves the Hamada conjecture on the maximum $\mathbb{F}_2$-codimension of a $4$-design. (2) If $C$ is a smooth, non-linear $3$-LCC with near-perfect completeness, then $n \geq k^{\Omega(\log k)}$. (3) If $C$ is a smooth, non-linear $3$-LCC with completeness $1 - \varepsilon$, then $n \geq \tilde{\Omega}(k^{\frac{1}{2\varepsilon}})$. In particular, when $\varepsilon$ is a small constant, this implies a lower bound for general non-linear LCCs that beats the prior best $n \geq \tilde{\Omega}(k^3)$ lower bound of [AGKM23] by a polynomial factor. Our design LCC lower bound is obtained via a fine-grained analysis of the Kikuchi matrix method applied to a variant of the matrix used in [KM23]. Our lower bounds for non-linear codes are obtained by designing a from-scratch reduction from nonlinear $3$-LCCs to a system of "chain polynomial equations": polynomial equations with similar structure to the long chain derivations that arise in the lower bounds for linear $3$-LCCs [KM23].
{"title":"Superpolynomial Lower Bounds for Smooth 3-LCCs and Sharp Bounds for Designs","authors":"Pravesh K. Kothari, Peter Manohar","doi":"arxiv-2404.06513","DOIUrl":"https://doi.org/arxiv-2404.06513","url":null,"abstract":"We give improved lower bounds for binary $3$-query locally correctable codes\u0000(3-LCCs) $C colon {0,1}^k rightarrow {0,1}^n$. Specifically, we prove: (1) If $C$ is a linear design 3-LCC, then $n geq 2^{(1 - o(1))sqrt{k} }$. A\u0000design 3-LCC has the additional property that the correcting sets for every\u0000codeword bit form a perfect matching and every pair of codeword bits is queried\u0000an equal number of times across all matchings. Our bound is tight up to a\u0000factor $sqrt{8}$ in the exponent of $2$, as the best construction of binary\u0000$3$-LCCs (obtained by taking Reed-Muller codes on $mathbb{F}_4$ and applying a\u0000natural projection map) is a design $3$-LCC with $n leq 2^{sqrt{8 k}}$. Up to\u0000a $sqrt{8}$ factor, this resolves the Hamada conjecture on the maximum\u0000$mathbb{F}_2$-codimension of a $4$-design. (2) If $C$ is a smooth, non-linear $3$-LCC with near-perfect completeness,\u0000then, $n geq k^{Omega(log k)}$. (3) If $C$ is a smooth, non-linear $3$-LCC with completeness $1 -\u0000varepsilon$, then $n geq tilde{Omega}(k^{frac{1}{2varepsilon}})$. In\u0000particular, when $varepsilon$ is a small constant, this implies a lower bound\u0000for general non-linear LCCs that beats the prior best $n geq\u0000tilde{Omega}(k^3)$ lower bound of [AGKM23] by a polynomial factor. Our design LCC lower bound is obtained via a fine-grained analysis of the\u0000Kikuchi matrix method applied to a variant of the matrix used in [KM23]. Our\u0000lower bounds for non-linear codes are obtained by designing a from-scratch\u0000reduction from nonlinear $3$-LCCs to a system of \"chain polynomial equations\":\u0000polynomial equations with similar structure to the long chain derivations that\u0000arise in the lower bounds for linear $3$-LCCs [KM23].","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}