"New Direct Sum Tests" by Alek Westover, Edward Yu, Kai Zheng (arXiv:2409.10464, 16 Sep 2024)

A function $f:[n]^{d} \to \mathbb{F}_2$ is a \emph{direct sum} if there are functions $L_i:[n]\to \mathbb{F}_2$ such that $f(x) = \sum_{i}L_i(x_i)$. In this work we give multiple results related to the property testing of direct sums. Our first result concerns a test proposed by Dinur and Golubev in 2019. We call their test the Diamond test and show that it is indeed a direct sum tester. More specifically, we show that if a function $f$ is $\epsilon$-far from being a direct sum, then the Diamond test rejects $f$ with probability at least $\Omega_{n,\epsilon}(1)$. Even in the case $n = 2$, the Diamond test is, to the best of our knowledge, novel, and it yields a new tester for the classic property of affinity. Apart from the Diamond test, we also analyze a broad family of direct sum tests which, at a high level, run an arbitrary affinity test on the restriction of $f$ to a random hypercube inside $[n]^d$. This family includes the direct sum test analyzed in \cite{di19} but does not include the Diamond test. As an application of our result, we obtain a direct sum test that works in the online adversary model of \cite{KRV}. Finally, we discuss a Fourier-analytic interpretation of the Diamond tester in the $n=2$ case, and we prove local correction results for direct sums, as conjectured by Dinur and Golubev.
"Complexity and algorithms for Swap median and relation to other consensus problems" by Luís Cunha, Thiago Lopes, Arnaud Mary (arXiv:2409.09734, 15 Sep 2024)

Genome rearrangements are events in which large blocks of DNA exchange pieces during evolution. The analysis of such events is a tool for understanding evolutionary genomics, based on finding the minimum number of rearrangements needed to transform one genome into another. In the general scenario, more than two genomes are considered, and new challenges arise. The \textsc{Median} problem consists of finding, given three permutations and a distance metric, a permutation $s$ that minimizes the sum of the distances between $s$ and each input. We study the \textsc{Median} problem over \emph{swap} distances in permutations, for which the computational complexity has been open for almost 20 years (Eriksen, \emph{Theor. Comput. Sci.}, 2007). We approach the problem from several directions. We relate median solutions to interval convex sets, where the concept of graph convexity inspires the following investigation: does a median permutation belong to every shortest path between one of the pairs of input permutations? We partially answer this question, and as a by-product we settle a long-open problem by proving that the \textsc{Swap Median} problem is NP-hard. Furthermore, using a similar approach, we show that the \textsc{Closest} problem, which seeks to minimize the maximum distance between the solution and the input permutations, is NP-hard even for three input permutations. This gives a sharp dichotomy: with two input permutations the problem is easily solvable, while with an arbitrary number of input permutations it has been known to be NP-hard since 2007 (Popov, \emph{Theor. Comput. Sci.}, 2007). In addition, we show that \textsc{Swap Median} and \textsc{Swap Closest} are APX-hard.
"Journalists, Emotions, and the Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the Launch of ChatGPT" by Seth C. Lewis, David M. Markowitz, Jon Benedik Bunquin (arXiv:2409.08761, 13 Sep 2024)

As part of a broader look at the impact of generative AI, this study investigated journalists' emotional responses to the release of ChatGPT at the time of its launch. By analyzing nearly 1 million tweets from journalists at major U.S. news outlets, we tracked changes in emotional tone and sentiment before and after the introduction of ChatGPT in November 2022. Using various computational and natural language processing techniques to measure emotional shifts in response to ChatGPT's release, we found an increase in positive emotion and a more favorable tone post-launch, suggesting initial optimism toward AI's potential. This research underscores the pivotal role of journalists as interpreters of technological innovation and disruption, highlighting how their emotional reactions may shape public narratives around emerging technologies. The study contributes to understanding the intersection of journalism, emotion, and AI, offering insights into the broader societal impact of generative AI tools.
"Almost-catalytic Computation" by Sagar Bisoyi, Krishnamoorthy Dinesh, Bhabya Deep Rai, Jayalal Sarma (arXiv:2409.07208, 11 Sep 2024)

Designing algorithms for space-bounded models with restoration requirements on the space used by the algorithm is an important challenge posed by the catalytic computation model introduced by Buhrman et al. (2014). Motivated by scenarios where the tape need not be restored unless doing so is useful, we define $ACL(A)$ to be the class of languages accepted by almost-catalytic Turing machines with respect to $A$ (which we call the catalytic set) that use at most $c\log n$ work space and $n^c$ catalytic space. We show that if there are almost-catalytic algorithms for a problem with catalytic sets $A \subseteq \Sigma^*$ and its complement, respectively, then the problem can be solved by a ZPP algorithm. Using this, we derive that to design catalytic algorithms, it suffices to design almost-catalytic algorithms whose catalytic set is the set of strings of odd weight ($PARITY$). Towards this, we consider two complexity measures of the set $A$ which are maximized for $PARITY$: random projection complexity ($\mathcal{R}(A)$) and subcube partition complexity ($\mathcal{P}(A)$). By making use of error-correcting codes, we show that for all $k \ge 1$, there is a language $A_k \subseteq \Sigma^*$ such that $\mathsf{DSPACE}(n^k) \subseteq ACL(A_k)$, where for every $m \ge 1$, $\mathcal{R}(A_k \cap \{0,1\}^m) \ge \frac{m}{4}$ and $\mathcal{P}(A_k \cap \{0,1\}^m)=2^{m/4}$. This contrasts with the catalytic machine model, where it is unclear whether it can accept all languages in $\mathsf{DSPACE}(\log^{1+\epsilon} n)$ for any $\epsilon > 0$. Improving the partition complexity of the catalytic set $A$ further, we show that for all $k \ge 1$, there is an $A_k \subseteq \{0,1\}^*$ such that $\mathsf{DSPACE}(\log^k n) \subseteq ACL(A_k)$, where for every $m \ge 1$, $\mathcal{R}(A_k \cap \{0,1\}^m) \ge \frac{m}{4}$ and $\mathcal{P}(A_k \cap \{0,1\}^m)=2^{m/4+\Omega(\log m)}$.
"Fast Simulation of Cellular Automata by Self-Composition" by Joseph Natal, Oleksiy Al-saadi (arXiv:2409.07065, 11 Sep 2024)

It is shown that computing the configuration of any one-dimensional cellular automaton at generation $n$ can be accelerated by constructing and running a composite automaton whose radius is proportional to $\log n$. The new automaton is the original automaton whose local rule function is composed with itself. The asymptotic time complexity to compute the configuration at generation $n$ is reduced from $O(n^2)$ operations to $O(n^2 / \log n)$ on a given machine with $O(n^2)$ memory usage. Experimental results are given for the case of Rule 30.
"Fully Characterizing Lossy Catalytic Computation" by Marten Folkertsma, Ian Mertz, Florian Speelman, Quinten Tupker (arXiv:2409.05046, 8 Sep 2024)

A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger "catalytic" tape which, while available as a work tape, has the caveat of being initialized with an arbitrary string that must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power: a logspace machine with a polynomial-length catalytic tape, known as catalytic logspace ($CL$), can compute problems which are believed to be impossible for $L$. A fundamental question about the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initiated by Gupta et al. (2024), who defined lossy catalytic logspace ($LCL[e]$) as a variant of $CL$ in which up to $e$ errors are allowed when resetting the catalytic tape. They showed that $LCL[e] = CL$ for any $e = O(1)$, which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space ($LCSPACE[s,c,e]$) in terms of ordinary catalytic space ($CSPACE[s,c]$). We show that $$LCSPACE[s,c,e] = CSPACE[\Theta(s + e \log c), \Theta(c)].$$ In other words, allowing $e$ errors on a catalytic tape of length $c$ is equivalent, up to a constant stretch, to an errorless catalytic machine with an additional $e \log c$ bits of ordinary working memory. As a consequence, we show that for any $e$, $LCL[e] = CL$ implies $\mathsf{SPACE}[e \log n] \subseteq \mathsf{ZPP}$, thus giving a barrier to any improvement beyond $LCL[O(1)] = CL$. We also show equivalent results for nondeterministic and randomized catalytic space.
"A Quantum Pigeonhole Principle and Two Semidefinite Relaxations of Communication Complexity" by Pavel Dvořák, Bruno Loff, Suhail Sherif (arXiv:2409.04592, 6 Sep 2024)

We study semidefinite relaxations of $\Pi_1$ combinatorial statements. By relaxing the pigeonhole principle, we obtain a new "quantum" pigeonhole principle, which is a stronger statement. By relaxing statements of the form "the communication complexity of $f$ is $> k$", we obtain new communication models, which we call "$\gamma_2$ communication" and "quantum-lab protocols". We prove, via an argument from proof complexity, that any natural model obtained by such a relaxation must solve all Karchmer--Wigderson games efficiently. However, the argument is not constructive, so we work to explicitly construct such protocols in these two models.
"Two-Sided Lossless Expanders in the Unbalanced Setting" by Eshan Chattopadhyay, Mohit Gurumukhani, Noam Ringach, Yunya Zhao (arXiv:2409.04549, 6 Sep 2024)

We present the first explicit construction of two-sided lossless expanders in the unbalanced setting (bipartite graphs that have many more nodes on the left than on the right). Prior to our work, all known explicit constructions in the unbalanced setting achieved only one-sided lossless expansion. Specifically, we show that the one-sided lossless expanders constructed by Kalev and Ta-Shma (RANDOM '22), which are based on the multiplicity codes introduced by Kopparty, Saraf, and Yekhanin (STOC '11), are in fact two-sided lossless expanders. Using our unbalanced bipartite expander, we easily obtain lossless (non-bipartite) expander graphs with high degree and a free group action. As far as we know, this is the first explicit construction of lossless (non-bipartite) expanders with $N$ vertices and degree $\ll N$.
A binary code $\mathrm{Enc}:\{0,1\}^k \to \{0,1\}^n$ is $(0.5-\epsilon,L)$-list decodable if for all $w \in \{0,1\}^n$, the set $\mathrm{List}(w)$ of all messages $m \in \{0,1\}^k$ such that the relative Hamming distance between $\mathrm{Enc}(m)$ and $w$ is at most $0.5-\epsilon$ has size at most $L$. Informally, a $q$-query local list-decoder for $\mathrm{Enc}$ is a randomized procedure $\mathrm{Dec}:[k]\times [L] \to \{0,1\}$ that, when given oracle access to a string $w$, makes at most $q$ oracle calls, and for every message $m \in \mathrm{List}(w)$, with high probability there exists $j \in [L]$ such that for every $i \in [k]$, with high probability, $\mathrm{Dec}^w(i,j)=m_i$. We prove lower bounds on $q$ that apply even if $L$ is huge (say $L=2^{k^{0.9}}$) and the rate of $\mathrm{Enc}$ is small (meaning that $n \ge 2^{k}$): 1. For $\epsilon \ge 1/k^{\nu}$ for some universal constant $0< \nu < 1$, we prove a lower bound of $q=\Omega(\frac{\log(1/\delta)}{\epsilon^2})$, where $\delta$ is the error probability of the local list-decoder. This bound is tight, as there is a matching upper bound of $q=O(\frac{\log(1/\delta)}{\epsilon^2})$ by Goldreich and Levin (STOC 1989) for the Hadamard code (which has $n=2^k$). This bound extends an earlier work of Grinberg, Shaltiel and Viola (FOCS 2018), which only works if $n \le 2^{k^{\gamma}}$ for some universal constant $0 < \gamma < 1$.
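The Goldreich-Levin local list-decoder achieving the matching upper bound is more involved, but the classical two-query local corrector for the Hadamard code (the unique-decoding warm-up, not the list-decoding regime studied above) illustrates what a few-query decoder with oracle access looks like. A minimal sketch with illustrative names:

```python
import random

def hadamard_local_correct(w, k, i, trials=25):
    """Two-query local corrector for the Hadamard code Enc(m)_r = <m, r> mod 2.
    If the oracle w agrees with Enc(m) on more than a 3/4 fraction of points,
    then for a random r both queries are uncorrupted with probability > 1/2,
    in which case w(r) XOR w(r + e_i) = m_i; majority vote amplifies this."""
    votes = 0
    for _ in range(trials):
        r = [random.randrange(2) for _ in range(k)]
        r_flip = r[:]
        r_flip[i] ^= 1          # flip the i-th coordinate: r + e_i
        votes += w(r) ^ w(r_flip)
    return int(2 * votes > trials)

# Oracle for an uncorrupted codeword of m = (1, 0, 1).
m = [1, 0, 1]
w = lambda r: sum(mi * ri for mi, ri in zip(m, r)) % 2
assert [hadamard_local_correct(w, 3, i) for i in range(3)] == m
```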