Maximizing Phylogenetic Diversity under Ecological Constraints: A Parameterized Complexity Study
Christian Komusiewicz, Jannik Schestag
arXiv:2405.17314 (2024-05-27)

In the NP-hard Optimizing PD with Dependencies (PDD) problem, the input consists of a phylogenetic tree $T$ over a set of taxa $X$, a food-web that describes the prey-predator relationships in $X$, and integers $k$ and $D$. The task is to find a set $S$ of $k$ species that is viable in the food-web and such that the subtree of $T$ obtained by retaining only the vertices of $S$ has total edge weight at least $D$. Here, viable means that for every predator taxon in $S$, the set $S$ contains at least one of its prey taxa. We provide the first systematic analysis of PDD and its special case s-PDD from a parameterized complexity perspective. For solution-size-related parameters, we show that PDD is FPT with respect to $D$ and with respect to $k$ plus the height of the phylogenetic tree. Moreover, we consider structural parameterizations of the food-web. For example, we give an FPT algorithm for the parameter that measures the vertex deletion distance to graphs in which every connected component is a complete graph. Finally, we show that s-PDD admits an FPT algorithm for the treewidth of the food-web. This disproves a conjecture of Faller et al. [Annals of Combinatorics, 2011], who conjectured that s-PDD is NP-hard even when the food-web is a tree.
Maximal Line Digraphs
Quentin Japhet, Dimitri Watel, Dominique Barth, Marc-Antoine Weisser
arXiv:2406.05141 (2024-05-27)

A line digraph $L(G) = (A, E)$ is the digraph constructed from a digraph $G = (V, A)$ such that there is an arc $(a,b)$ in $L(G)$ if the terminal node of $a$ in $G$ is the initial node of $b$. The maximum number of arcs in a line digraph with $m$ nodes is $(m/2)^2 + (m/2)$ if $m$ is even, and $((m-1)/2)^2 + m - 1$ otherwise. For $m \geq 7$, the line digraph attaining this maximum is unique if $m$ is even; if $m$ is odd, there are exactly two such line digraphs, each being the transpose of the other.
Unconventional complexity classes in unconventional computing (extended abstract)
Antonio E. Porreca
arXiv:2405.16896 (2024-05-27)

Many unconventional computing models, including some that appear to be quite different from traditional ones such as Turing machines, happen to characterise either the complexity class P or PSPACE when working in deterministic polynomial time (and in the maximally parallel way, where this applies). We discuss variants of cellular automata and membrane systems that escape this dichotomy and characterise intermediate complexity classes, usually defined in terms of Turing machines with oracles, as well as some possible reasons why this happens.
A Strong Direct Sum Theorem for Distributional Query Complexity
Guy Blanc, Caleb Koch, Carmen Strassle, Li-Yang Tan
arXiv:2405.16340 (2024-05-25)
Consider the expected query complexity of computing the $k$-fold direct product $f^{\otimes k}$ of a function $f$ to error $\varepsilon$ with respect to a distribution $\mu^k$. One strategy is to sequentially compute each of the $k$ copies to error $\varepsilon/k$ with respect to $\mu$ and apply the union bound. We prove a strong direct sum theorem showing that this naive strategy is essentially optimal. In particular, computing a direct product necessitates a blowup in both query complexity and error.

Strong direct sum theorems contrast with results that only show a blowup in query complexity or error but not both. There has been a long line of such results for distributional query complexity, dating back to (Impagliazzo, Raz, Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem had been elusive.

A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo 1995) in the context of query complexity. We prove a new "resilience lemma" that accompanies it, showing that the hardcore of $f^{\otimes k}$ is likely to remain dense under arbitrary partitions of the input space.
{"title":"A Strong Direct Sum Theorem for Distributional Query Complexity","authors":"Guy Blanc, Caleb Koch, Carmen Strassle, Li-Yang Tan","doi":"arxiv-2405.16340","DOIUrl":"https://doi.org/arxiv-2405.16340","url":null,"abstract":"Consider the expected query complexity of computing the $k$-fold direct\u0000product $f^{otimes k}$ of a function $f$ to error $varepsilon$ with respect\u0000to a distribution $mu^k$. One strategy is to sequentially compute each of the\u0000$k$ copies to error $varepsilon/k$ with respect to $mu$ and apply the union\u0000bound. We prove a strong direct sum theorem showing that this naive strategy is\u0000essentially optimal. In particular, computing a direct product necessitates a\u0000blowup in both query complexity and error. Strong direct sum theorems contrast with results that only show a blowup in\u0000query complexity or error but not both. There has been a long line of such\u0000results for distributional query complexity, dating back to (Impagliazzo, Raz,\u0000Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem\u0000had been elusive. A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo\u00001995) in the context of query complexity. We prove a new \"resilience lemma\"\u0000that accompanies it, showing that the hardcore of $f^{otimes k}$ is likely to\u0000remain dense under arbitrary partitions of the input space.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"345 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141165515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complexity of Multiple-Hamiltonicity in Graphs of Bounded Degree
Brian Liu, Nathan S. Sheffield, Alek Westover
arXiv:2405.16270 (2024-05-25)

We study the following generalization of the Hamiltonian cycle problem: Given integers $a,b$ and a graph $G$, does there exist a closed walk in $G$ that visits every vertex at least $a$ times and at most $b$ times? Equivalently, does there exist a connected $[2a,2b]$-factor of $2b \cdot G$ with all degrees even? This problem is NP-hard for any constants $1 \leq a \leq b$. However, the graphs produced by known reductions have maximum degree growing linearly in $b$. The case $a = b = 1$ -- i.e., Hamiltonicity -- remains NP-hard even in $3$-regular graphs; a natural question is whether this is true for other $a$, $b$. In this work, we study which $a, b$ permit polynomial-time algorithms and which lead to NP-hardness in graphs with constrained degrees. We give tight characterizations for regular graphs and graphs of bounded max-degree, both directed and undirected.
Game Derandomization
Samuel Epstein
arXiv:2405.16353 (2024-05-25)

Using Kolmogorov game derandomization, one can prove upper bounds on the Kolmogorov complexity of deterministic players that win against deterministic environments. This paper gives improved upper bounds on the Kolmogorov complexity of such players, and generalizes the result to probabilistic games. This applies to computable, lower-computable, and uncomputable environments. We characterize the classic even-odds game and then generalize these results to time-bounded players and to all zero-sum repeated games. We also characterize partial game derandomization. But first, we start with an illustrative example of game derandomization, taking place on the island of Crete.
Pseudorandomness, symmetry, smoothing: I
Harm Derksen, Peter Ivanov, Chin Ho Lee, Emanuele Viola
arXiv:2405.13143 (2024-05-21)
We prove several new results about bounded-uniform and small-bias distributions. A main message is that small-bias distributions, even when perturbed with noise, do not fool several classes of tests better than bounded uniformity does. We prove this for threshold tests, small-space algorithms, and small-depth circuits. In particular, we obtain small-bias distributions that

1) achieve an optimal lower bound on their statistical distance to any bounded-uniform distribution. This closes a line of research initiated by Alon, Goldreich, and Mansour in 2003, and improves on a result by O'Donnell and Zhao.

2) have heavier tail mass than the uniform distribution. This answers a question posed by several researchers including Bun and Steinke.

3) rule out a popular paradigm for constructing pseudorandom generators, originating in a 1989 work by Ajtai and Wigderson. This again answers a question raised by several researchers. For branching programs, our result matches a bound by Forbes and Kelley.

Our small-bias distributions above are symmetric. We show that the xor of any two symmetric small-bias distributions fools any bounded function. Hence our examples cannot be extended to the xor of two small-bias distributions, another popular paradigm whose power remains unknown. We also generalize and simplify the proof of a result of Bazzi.
{"title":"Pseudorandomness, symmetry, smoothing: I","authors":"Harm Derksen, Peter Ivanov, Chin Ho Lee, Emanuele Viola","doi":"arxiv-2405.13143","DOIUrl":"https://doi.org/arxiv-2405.13143","url":null,"abstract":"We prove several new results about bounded uniform and small-bias\u0000distributions. A main message is that, small-bias, even perturbed with noise,\u0000does not fool several classes of tests better than bounded uniformity. We prove\u0000this for threshold tests, small-space algorithms, and small-depth circuits. In\u0000particular, we obtain small-bias distributions that 1) achieve an optimal lower bound on their statistical distance to any\u0000bounded-uniform distribution. This closes a line of research initiated by Alon,\u0000Goldreich, and Mansour in 2003, and improves on a result by O'Donnell and Zhao. 2) have heavier tail mass than the uniform distribution. This answers a\u0000question posed by several researchers including Bun and Steinke. 3) rule out a popular paradigm for constructing pseudorandom generators,\u0000originating in a 1989 work by Ajtai and Wigderson. This again answers a\u0000question raised by several researchers. For branching programs, our result\u0000matches a bound by Forbes and Kelley. Our small-bias distributions above are symmetric. We show that the xor of any\u0000two symmetric small-bias distributions fools any bounded function. Hence our\u0000examples cannot be extended to the xor of two small-bias distributions, another\u0000popular paradigm whose power remains unknown. We also generalize and simplify\u0000the proof of a result of Bazzi.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fixed-parameter tractability of canonical polyadic decomposition over finite fields
Jason Yang
arXiv:2405.11699 (2024-05-19)

We present a simple proof that finding a rank-$R$ canonical polyadic decomposition of 3-dimensional tensors over a finite field $\mathbb{F}$ is fixed-parameter tractable with respect to $R$ and $\mathbb{F}$. We also give some more concrete upper bounds on the time complexity of this problem.
Injective hardness condition for PCSPs
Demian Banakh, Marcin Kozik
arXiv:2405.10774 (2024-05-17)

We present a template for the Promise Constraint Satisfaction Problem (PCSP) that is NP-hard but does not satisfy the current state-of-the-art hardness condition [ACMTCT'21]. We introduce a new "injective" condition based on the smooth version of the layered PCP theorem and use it to confirm that the problem is indeed NP-hard. In the second part of the article, we establish a dichotomy for Boolean PCSPs defined by templates whose polymorphisms lie in the set of linear threshold functions. The reasoning relies on the new injective condition.
You Can't Solve These Super Mario Bros. Levels: Undecidable Mario Games
MIT Hardness Group, Hayashi Ani, Erik D. Demaine, Holden Hall, Ricardo Ruiz, Naveen Venkat
arXiv:2405.10546 (2024-05-17)
We prove RE-completeness (and thus undecidability) of several 2D games in the Super Mario Bros. platform video game series: the New Super Mario Bros. series (original, Wii, U, and 2), and both Super Mario Maker games in all five game styles (Super Mario Bros. 1 and 3, Super Mario World, New Super Mario Bros. U, and Super Mario 3D World). These results hold even when we restrict to constant-size levels and screens, but they do require generalizing to allow arbitrarily many enemies at each location and onscreen, as well as allowing for an exponentially large (or absent) timer. Our New Super Mario Bros. constructions fit within one standard screen size. In our Super Mario Maker reductions, we work within the standard screen size and use the property that the game engine remembers offscreen objects that are global because they are supported by "global ground". To prove these Mario results, we build a new theory of counter gadgets in the motion-planning-through-gadgets framework, and provide a suite of simple gadgets for which reachability is RE-complete.