In this paper, I first establish, via methods other than the Gottesman-Knill theorem, the existence of an infinite set of instances of simulating a quantum circuit to decide a decision problem that can be simulated classically. I then examine under which restrictions on quantum circuits the existence of infinitely many classically simulable instances persists. There turn out to be a vast number of such restrictions, and any combination of those found can be applied simultaneously without eliminating the infinite set of classically simulable instances. Further analysis of the tools used here then shows that there exists a language to which every (promise) BQP language is one-one reducible. This language is also not P-bi-immune under very many promises.
{"title":"Extensively Not P-Bi-Immune promiseBQP-Complete Languages","authors":"Andrew Jackson","doi":"arxiv-2406.16764","DOIUrl":"https://doi.org/arxiv-2406.16764","url":null,"abstract":"In this paper, I first establish -- via methods other than the\u0000Gottesman-Knill theorem -- the existence of an infinite set of instances of\u0000simulating a quantum circuit to decide a decision problem that can be simulated\u0000classically. I then examine under what restrictions on quantum circuits the\u0000existence of infinitely many classically simulable instances persists. There\u0000turns out to be a vast number of such restrictions, and any combination of\u0000those found can be applied at the same time without eliminating the infinite\u0000set of classically simulable instances. Further analysis of the tools used in\u0000this then shows there exists a language that every (promise) BQP language is\u0000one-one reducible to. This language is also not P-bi-immune under very many\u0000promises.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"71 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher Kempes, Sara I. Walker, Michael Lachmann, Leroy Cronin
Assembly theory (AT) quantifies selection using the assembly equation and identifies complex objects that occur in abundance based on two measurements: the assembly index and the copy number. The assembly index is the minimal number of recursive joining operations necessary to construct an object from basic parts, and the copy number is the number of copies of the given object that are observed. Together these allow defining a quantity, called Assembly, which captures the amount of causation required to produce the observed objects in the sample. AT's focus on how selection generates complexity offers an approach distinct from that of computational complexity theory, which focuses on minimum descriptions via compressibility. To explore formal differences between the two approaches, we show several simple and explicit mathematical examples demonstrating that the assembly index, itself only one piece of the theoretical framework of AT, is formally not equivalent to other commonly used complexity measures from computer science and information theory, including Huffman encoding and Lempel-Ziv-Welch compression.
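To make the contrast concrete, here is a small, self-contained Python sketch (not from the paper) that compares a toy string analogue of the assembly index with LZW compression. The assumptions are mine: basic parts are the distinct characters of the string, the only operation is concatenation with reuse of previously built parts, and the LZW code count stands in for compressed size; the authors' exact measures and examples may differ.

```python
def lzw_codes(s):
    """Standard LZW compression over byte-sized characters; returns the code list."""
    dictionary = {chr(i): i for i in range(256)}
    next_code, w, codes = 256, "", []
    for c in s:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = next_code
            next_code += 1
            w = c
    if w:
        codes.append(dictionary[w])
    return codes


def assembly_index(target, cap=12):
    """Toy string analogue of the assembly index: the minimum number of joining
    (concatenation) steps needed to build `target` from its distinct characters,
    where every previously built part can be reused.  Iterative-deepening search,
    exponential, so only meant for short illustrative strings."""
    def reachable(pool, steps):
        if target in pool:
            return True
        if steps == 0:
            return False
        for a in pool:
            for b in pool:
                ab = a + b
                # Any part used in an optimal pathway is a contiguous block of
                # the target, so joins producing non-substrings can be pruned.
                if ab not in pool and ab in target:
                    if reachable(pool | {ab}, steps - 1):
                        return True
        return False

    basics = frozenset(target)
    for k in range(cap + 1):
        if reachable(basics, k):
            return k
    return None


if __name__ == "__main__":
    s = "ABABABAB"
    # Joins A+B, AB+AB, ABAB+ABAB give assembly index 3, while LZW emits 5 codes.
    print("assembly index (toy):", assembly_index(s))
    print("LZW code count:     ", len(lzw_codes(s)))
```

Even on this one repetitive string the two quantities behave differently, which is the flavor of formal non-equivalence the abstract refers to.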
{"title":"Assembly Theory and its Relationship with Computational Complexity","authors":"Christopher Kempes, Sara I. Walker, Michael Lachmann, Leroy Cronin","doi":"arxiv-2406.12176","DOIUrl":"https://doi.org/arxiv-2406.12176","url":null,"abstract":"Assembly theory (AT) quantifies selection using the assembly equation and\u0000identifies complex objects that occur in abundance based on two measurements,\u0000assembly index and copy number. The assembly index is determined by the minimal\u0000number of recursive joining operations necessary to construct an object from\u0000basic parts, and the copy number is how many of the given object(s) are\u0000observed. Together these allow defining a quantity, called Assembly, which\u0000captures the amount of causation required to produce the observed objects in\u0000the sample. AT's focus on how selection generates complexity offers a distinct\u0000approach to that of computational complexity theory which focuses on minimum\u0000descriptions via compressibility. To explore formal differences between the two\u0000approaches, we show several simple and explicit mathematical examples\u0000demonstrating that the assembly index, itself only one piece of the theoretical\u0000framework of AT, is formally not equivalent to other commonly used complexity\u0000measures from computer science and information theory including Huffman\u0000encoding and Lempel-Ziv-Welch compression.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove in this paper that there is a language $L_d$ accepted by some nondeterministic Turing machines but not by any ${\rm co}\mathcal{NP}$-machines (defined later). We further show that $L_d$ is in $\mathcal{NP}$, thus proving that $\mathcal{NP}\neq{\rm co}\mathcal{NP}$. The techniques used in this paper are lazy diagonalization and the new technique developed in the author's recent work \cite{Lin21}. As a by-product, we reach the important result of \cite{Lin21} that $\mathcal{P}\neq\mathcal{NP}$ once again, which is clear from the above outcome and the well-known fact that $\mathcal{P}={\rm co}\mathcal{P}$. Other direct consequences are also summarized.
{"title":"On $NP$ versus ${rm co}NP$","authors":"Tianrong Lin","doi":"arxiv-2406.10476","DOIUrl":"https://doi.org/arxiv-2406.10476","url":null,"abstract":"We prove in this paper that there is a language $L_d$ accepted by some\u0000nondeterministic Turing machines but not by any ${rm co}mathcal{NP}$-machines\u0000(defined later). We further show that $L_d$ is in $mathcal{NP}$, thus proving\u0000that $mathcal{NP}neq{rm co}mathcal{NP}$. The techniques used in this paper\u0000are lazy-diagonalization and the novel new technique developed in author's\u0000recent work cite{Lin21}. As a by-product, we reach the important result\u0000cite{Lin21} that $mathcal{P}neqmathcal{NP}$ once again, which is clear from\u0000the above outcome and the well-known fact that $mathcal{P}={rm\u0000co}mathcal{P}$. Other direct consequences are also summarized.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using properties of Blum complexity measures and certain complexity class operators, we exhibit a total computable and non-decreasing function $t_{\mathsf{poly}}$ such that for all $k$, $\Sigma_k\mathsf{P} = \Sigma_k\mathsf{TIME}(t_{\mathsf{poly}})$, $\mathsf{BPP} = \mathsf{BPTIME}(t_{\mathsf{poly}})$, $\mathsf{RP} = \mathsf{RTIME}(t_{\mathsf{poly}})$, $\mathsf{UP} = \mathsf{UTIME}(t_{\mathsf{poly}})$, $\mathsf{PP} = \mathsf{PTIME}(t_{\mathsf{poly}})$, $\mathsf{Mod}_k\mathsf{P} = \mathsf{Mod}_k\mathsf{TIME}(t_{\mathsf{poly}})$, $\mathsf{PSPACE} = \mathsf{DSPACE}(t_{\mathsf{poly}})$, and so forth. A similar statement holds for any collection of language classes, provided that each class is definable by applying a certain complexity class operator to some Blum complexity class.
{"title":"A Refinement of the McCreight-Meyer Union Theorem","authors":"Matthew Fox, Chaitanya Karamchedu","doi":"arxiv-2406.08600","DOIUrl":"https://doi.org/arxiv-2406.08600","url":null,"abstract":"Using properties of Blum complexity measures and certain complexity class\u0000operators, we exhibit a total computable and non-decreasing function\u0000$t_{mathsf{poly}}$ such that for all $k$, $Sigma_kmathsf{P} =\u0000Sigma_kmathsf{TIME}(t_{mathsf{poly}})$, $mathsf{BPP} =\u0000mathsf{BPTIME}(t_{mathsf{poly}})$, $mathsf{RP} =\u0000mathsf{RTIME}(t_{mathsf{poly}})$, $mathsf{UP} =\u0000mathsf{UTIME}(t_{mathsf{poly}})$, $mathsf{PP} =\u0000mathsf{PTIME}(t_{mathsf{poly}})$, $mathsf{Mod}_kmathsf{P} =\u0000mathsf{Mod}_kmathsf{TIME}(t_{mathsf{poly}})$, $mathsf{PSPACE} =\u0000mathsf{DSPACE}(t_{mathsf{poly}})$, and so forth. A similar statement holds\u0000for any collection of language classes, provided that each class is definable\u0000by applying a certain complexity class operator to some Blum complexity class.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"2012 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141518331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Longlong Lin, Tao Jia, Zeli Wang, Jin Zhao, Rong-Hua Li
Higher-order graph clustering aims to partition the graph using frequently occurring subgraphs. Motif conductance is one of the most promising higher-order graph clustering models due to its strong interpretability. However, existing motif conductance based graph clustering algorithms are mainly limited by a seminal two-stage reweighting computing framework, which needs to enumerate all motif instances to obtain an edge-weighted graph for partitioning. Such a framework has two critical defects: (1) it can only provide a quadratic bound for the motif with three vertices, and whether there is provable clustering quality for other motifs remains an open question; (2) the enumeration of motif instances incurs prohibitively high costs on large motifs or large dense graphs due to combinatorial explosion. Besides, expensive spectral clustering or local graph diffusion on the edge-weighted graph also makes existing methods unable to handle massive graphs with millions of nodes. To overcome these dilemmas, we propose a Provable and Scalable Motif Conductance algorithm, PSMC, which has a fixed, motif-independent approximation ratio for any motif. Specifically, PSMC first defines a new vertex metric, Motif Resident, based on the given motif, which can be computed locally. Then, it iteratively deletes the vertex with the smallest motif resident value, very efficiently, using novel dynamic update techniques. Finally, it outputs the locally optimal result found during this iterative process. To further boost efficiency, we propose several effective bounds for estimating the motif resident value of each vertex, which greatly reduce computational costs. Empirical results show that our proposed algorithms achieve a 3.2x-32x speedup and improve clustering quality by a factor of at least 12 compared to the baselines.
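As a rough illustration of the peeling idea described above (and not of PSMC itself), the following self-contained Python sketch uses triangles as the motif and the number of surviving triangles a vertex lies in as a stand-in for the paper's Motif Resident score. PSMC's actual metric, dynamic updates, and estimation bounds are what make the real algorithm provable and scalable; this naive version recomputes everything from scratch.

```python
from itertools import combinations


def triangles(adj):
    """All triangles of an undirected graph given as {node: set_of_neighbors}."""
    tris = []
    for u in adj:
        for v in adj[u]:
            if u < v:
                for w in adj[u] & adj[v]:
                    if v < w:
                        tris.append((u, v, w))
    return tris


def motif_conductance(tris, S):
    """Triangle conductance of vertex set S: cut triangles divided by the
    smaller of the triangle volumes of S and of its complement."""
    inside = sum(all(x in S for x in t) for t in tris)
    outside = sum(all(x not in S for x in t) for t in tris)
    cut = len(tris) - inside - outside
    vol_S = sum(x in S for t in tris for x in t)
    denom = min(vol_S, 3 * len(tris) - vol_S)
    return cut / denom if denom else float("inf")


def greedy_motif_peeling(adj):
    """Repeatedly delete the vertex lying in the fewest surviving triangles and
    return the intermediate vertex set with the lowest triangle conductance.
    Naive recomputation each round; a stand-in for PSMC's locally computed
    scores and dynamic updates."""
    tris = triangles(adj)
    remaining = set(adj)
    best_set, best_phi = set(remaining), float("inf")
    while len(remaining) > 2:
        live = [t for t in tris if all(x in remaining for x in t)]
        score = {u: 0 for u in remaining}
        for t in live:
            for x in t:
                score[x] += 1
        remaining.remove(min(remaining, key=score.get))
        phi = motif_conductance(tris, remaining)
        if phi < best_phi:
            best_phi, best_set = phi, set(remaining)
    return best_set, best_phi


if __name__ == "__main__":
    # Two 4-cliques joined by a single edge; peeling isolates one clique (phi = 0).
    adj = {i: set() for i in range(8)}
    edges = list(combinations(range(4), 2)) + list(combinations(range(4, 8), 2)) + [(3, 4)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    print(greedy_motif_peeling(adj))
```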
{"title":"PSMC: Provable and Scalable Algorithms for Motif Conductance Based Graph Clustering","authors":"Longlong Lin, Tao Jia, Zeli Wang, Jin Zhao, Rong-Hua Li","doi":"arxiv-2406.07357","DOIUrl":"https://doi.org/arxiv-2406.07357","url":null,"abstract":"Higher-order graph clustering aims to partition the graph using frequently\u0000occurring subgraphs. Motif conductance is one of the most promising\u0000higher-order graph clustering models due to its strong interpretability.\u0000However, existing motif conductance based graph clustering algorithms are\u0000mainly limited by a seminal two-stage reweighting computing framework, needing\u0000to enumerate all motif instances to obtain an edge-weighted graph for\u0000partitioning. However, such a framework has two-fold vital defects: (1) It can\u0000only provide a quadratic bound for the motif with three vertices, and whether\u0000there is provable clustering quality for other motifs is still an open\u0000question. (2) The enumeration procedure of motif instances incurs prohibitively\u0000high costs against large motifs or large dense graphs due to combinatorial\u0000explosions. Besides, expensive spectral clustering or local graph diffusion on\u0000the edge-weighted graph also makes existing methods unable to handle massive\u0000graphs with millions of nodes. To overcome these dilemmas, we propose a\u0000Provable and Scalable Motif Conductance algorithm PSMC, which has a fixed and\u0000motif-independent approximation ratio for any motif. Specifically, PSMC first\u0000defines a new vertex metric Motif Resident based on the given motif, which can\u0000be computed locally. Then, it iteratively deletes the vertex with the smallest\u0000motif resident value very efficiently using novel dynamic update technologies.\u0000Finally, it outputs the locally optimal result during the above iterative\u0000process. To further boost efficiency, we propose several effective bounds to\u0000estimate the motif resident value of each vertex, which can greatly reduce\u0000computational costs. Empirical results show that our proposed algorithms\u0000achieve 3.2-32 times speedup and improve the quality by at least 12 times than\u0000the baselines.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The purpose of this overview is to explain the enormous impact of Les Valiant's eponymous short conference contribution from 1979 on the development of algebraic complexity.
{"title":"Completeness classes in algebraic complexity theory","authors":"Peter Bürgisser","doi":"arxiv-2406.06217","DOIUrl":"https://doi.org/arxiv-2406.06217","url":null,"abstract":"The purpose of this overview is to explain the enormous impact of Les\u0000Valiant's eponymous short conference contribution from 1979 on the development\u0000of algebraic complexity.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"233 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141518333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the context of Algorithmic Statistics, all strings with low mutual information with the halting sequence have flat Kolmogorov structure functions. Assuming the Independence Postulate, strings with non-negligible information about the halting sequence are purely mathematical constructions and cannot be found in nature. Thus Algorithmic Statistics does not study strings in the physical world. This leads to the general thesis that two-part codes require limitations, as reflected in the Minimum Description Length Principle. We also discuss issues with set-restricted Kolmogorov structure functions.
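For orientation only (this is the standard textbook definition, not necessarily the exact formulation used in the paper), the Kolmogorov structure function of a string $x$ records the best "model" set at each description-complexity budget $\alpha$:

```latex
% Standard Kolmogorov structure function (Kolmogorov; Vereshchagin--Vitanyi),
% with K(.) denoting Kolmogorov complexity of a finite set S containing x.
h_x(\alpha) \;=\; \min \bigl\{ \log_2 |S| \;:\; x \in S,\ S \text{ finite},\ K(S) \le \alpha \bigr\}
```

A two-part code then describes $x$ by first describing $S$ (at most $\alpha$ bits) and then giving $x$'s index within $S$ ($\log_2 |S|$ bits); this is the kind of two-part code that the abstract's remark about the Minimum Description Length Principle concerns.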
{"title":"On Kolmogorov Structure Functions","authors":"Samuel Epstein","doi":"arxiv-2406.05903","DOIUrl":"https://doi.org/arxiv-2406.05903","url":null,"abstract":"All strings with low mutual information with the halting sequence will have\u0000flat Kolmogorov Structure Functions, in the context of Algorithmic Statistics.\u0000Assuming the Independence Postulate, strings with non-negligible information\u0000with the halting sequence are purely mathematical constructions, and cannot be\u0000found in nature. Thus Algorithmic Statistics does not study strings in the\u0000physical world. This leads to the general thesis that two part codes require\u0000limitations as shown in the Minimum Description Length Principle. We also\u0000discuss issues with set-restricted Kolmogorov Structure Functions.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"132 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141518332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Chitta Baral, Tran Cao Son
Reasoning about actions and change (RAC) has historically driven the development of many early AI challenges, such as the frame problem, and many AI disciplines, including non-monotonic and commonsense reasoning. The role of RAC remains important even now, particularly for tasks involving dynamic environments, interactive scenarios, and commonsense reasoning. Despite the progress of Large Language Models (LLMs) in various AI domains, their performance on RAC is underexplored. To address this gap, we introduce a new benchmark, ActionReasoningBench, encompassing 13 domains and rigorously evaluating LLMs across eight different areas of RAC: Object Tracking, Fluent Tracking, State Tracking, Action Executability, Effects of Actions, Numerical RAC, Hallucination Detection, and Composite Questions. Furthermore, we also investigate the indirect effects of actions due to ramification constraints for every domain. Finally, we evaluate our benchmark using open-source and commercial state-of-the-art LLMs, including GPT-4o, Gemini-1.0-Pro, Llama2-7b-chat, Llama2-13b-chat, Llama3-8b-instruct, Gemma-2b-instruct, and Gemma-7b-instruct. Our findings indicate that these models face significant challenges across all categories included in our benchmark.
{"title":"ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints","authors":"Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Chitta Baral, Tran Cao Son","doi":"arxiv-2406.04046","DOIUrl":"https://doi.org/arxiv-2406.04046","url":null,"abstract":"Reasoning about actions and change (RAC) has historically driven the\u0000development of many early AI challenges, such as the frame problem, and many AI\u0000disciplines, including non-monotonic and commonsense reasoning. The role of RAC\u0000remains important even now, particularly for tasks involving dynamic\u0000environments, interactive scenarios, and commonsense reasoning. Despite the\u0000progress of Large Language Models (LLMs) in various AI domains, their\u0000performance on RAC is underexplored. To address this gap, we introduce a new\u0000benchmark, ActionReasoningBench, encompassing 13 domains and rigorously\u0000evaluating LLMs across eight different areas of RAC. These include - Object\u0000Tracking, Fluent Tracking, State Tracking, Action Executability, Effects of\u0000Actions, Numerical RAC, Hallucination Detection, and Composite Questions.\u0000Furthermore, we also investigate the indirect effect of actions due to\u0000ramification constraints for every domain. Finally, we evaluate our benchmark\u0000using open-sourced and commercial state-of-the-art LLMs, including GPT-4o,\u0000Gemini-1.0-Pro, Llama2-7b-chat, Llama2-13b-chat, Llama3-8b-instruct,\u0000Gemma-2b-instruct, and Gemma-7b-instruct. Our findings indicate that these\u0000models face significant challenges across all categories included in our\u0000benchmark.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A proportionally dense subgraph (PDS) of a graph is an induced subgraph of size at least two such that every vertex in the subgraph has proportionally as many neighbors inside as outside of the subgraph. Then, maxPDS is the problem of determining a PDS of maximum size in a given graph. If we further require that a PDS induces a connected subgraph, we refer to the problem as connected maxPDS. In this paper, we study the complexity of maxPDS with respect to parameters representing the density of a graph and its complement. We consider $\Delta$, representing the maximum degree, $h$, representing the $h$-index, and degen, representing the degeneracy of a graph. We show that maxPDS is NP-hard parameterized by $\Delta$, $h$ and degen. More specifically, we show that maxPDS is NP-hard on graphs with $\Delta=4$, $h=4$ and degen$=2$. Then, we show that maxPDS is NP-hard when restricted to dense graphs, more specifically graphs $G$ such that $\Delta(\overline{G})\leq 6$, and graphs $G$ such that $\mathrm{degen}(\overline{G}) \leq 2$ and $\overline{G}$ is bipartite, where $\overline{G}$ denotes the complement of $G$. On the other hand, we show that maxPDS is polynomial-time solvable on graphs with $h\le 2$. Finally, we consider graphs $G$ such that $h(\overline{G})\le 2$ and show that there exists a polynomial-time algorithm for finding a PDS of maximum size in such graphs. This result implies polynomial-time complexity on graphs with $n$ vertices of minimum degree $n-3$, i.e. graphs $G$ such that $\Delta(\overline{G})\le 2$. For each result presented in this paper, we consider connected maxPDS and explain how to extend the result when connectivity is required.
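As one concrete reading of the definition above, here is a minimal Python checker (not from the paper). It assumes the formalization in which $S$ is proportionally dense when every $v \in S$ satisfies $\deg_S(v)/(|S|-1) \ge \deg_{V \setminus S}(v)/(n-|S|)$, which is a standard way to make "proportionally as many neighbors inside as outside" precise; the authors' exact inequality may differ.

```python
def is_pds(adj, S):
    """Check whether S induces a proportionally dense subgraph of the graph
    given as {node: set_of_neighbors}.  Assumed formalization: |S| >= 2, S a
    proper subset of the vertices, and for every v in S,
        deg_S(v) / (|S| - 1)  >=  deg_{V \ S}(v) / (n - |S|).
    The inequality is cross-multiplied to stay in integer arithmetic."""
    n = len(adj)
    S = set(S)
    if len(S) < 2 or len(S) >= n:
        return False
    for v in S:
        inside = len(adj[v] & S)
        outside = len(adj[v]) - inside
        if inside * (n - len(S)) < outside * (len(S) - 1):
            return False
    return True


if __name__ == "__main__":
    # Star with center 0 and leaves 1, 2, 3.
    star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
    print(is_pds(star, {0, 1}))  # True: both vertices meet the proportion check
    print(is_pds(star, {1, 2}))  # False: each leaf has all its neighbors outside
```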
{"title":"Proportionally dense subgraphs of maximum size in degree-constrained graphs","authors":"Narmina Baghirova, Antoine Castillon","doi":"arxiv-2405.20847","DOIUrl":"https://doi.org/arxiv-2405.20847","url":null,"abstract":"A proportionally dense subgraph (PDS) of a graph is an induced subgraph of\u0000size at least two such that every vertex in the subgraph has proportionally as\u0000many neighbors inside as outside of the subgraph. Then, maxPDS is the problem\u0000of determining a PDS of maximum size in a given graph. If we further require\u0000that a PDS induces a connected subgraph, we refer to such problem as connected\u0000maxPDS. In this paper, we study the complexity of maxPDS with respect to\u0000parameters representing the density of a graph and its complement. We consider\u0000$Delta$, representing the maximum degree, $h$, representing the $h$-index, and\u0000degen, representing the degeneracy of a graph. We show that maxPDS is NP-hard\u0000parameterized by $Delta,h$ and degen. More specifically, we show that maxPDS\u0000is NP-hard on graphs with $Delta=4$, $h=4$ and degen=2. Then, we show that\u0000maxPDS is NP-hard when restricted to dense graphs, more specifically graphs $G$\u0000such that $Delta(overline{G})leq 6$, and graphs $G$ such that\u0000$degen(overline{G}) leq 2$ and $overline{G}$ is bipartite, where\u0000$overline{G}$ represents the complement of $G$. On the other hand, we show\u0000that maxPDS is polynomial-time solvable on graphs with $hle2$. Finally, we\u0000consider graphs $G$ such that $h(overline{G})le 2$ and show that there exists\u0000a polynomial-time algorithm for finding a PDS of maximum size in such graphs.\u0000This result implies polynomial-time complexity on graphs with $n$ vertices of\u0000minimum degree $n-3$, i.e. graphs $G$ such that $Delta(overline{G})le 2$.\u0000For each result presented in this paper, we consider connected maxPDS and\u0000explain how to extend it when we require connectivity.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141257054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The abstract tile assembly model (aTam) is a model of DNA self-assembly. Most studies focus on cooperative aTam, where a form of synchronization between the tiles is possible; simulating Turing machines is achievable in this context. Few results and constructions are known for the non-cooperative case (a variant of Wang tilings where assemblies do not need to cover the whole plane and some mismatches may occur). Introduced by P.E. Meunier and D. Regnault, efficient paths are a non-trivial construction for non-cooperative aTam. These paths of width nlog(n) are designed with n different tile types. Assembling them relies heavily on a form of ``non-determinism'': the set of tiles may produce different finite terminal assemblies, but they all contain the same efficient path. Directed non-cooperative aTam does not allow this non-determinism, as only one assembly may be produced by a tile assembly system. This variant of aTam is the only one that has been shown to be decidable. In this paper, we show that if the terminal assembly of a directed non-cooperative tile assembly system is finite, then its width and length are linear in the size of the tile assembly system. This result implies that the construction of efficient paths cannot be generalized to the directed case and that some computation must rely on a competition between different paths. It also implies that the construction of a square of width n using 2n-1 tile types is asymptotically optimal. Moreover, we hope that the techniques introduced here will lead to a better understanding of the non-directed case.
{"title":"A linear bound for the size of the finite terminal assembly of a directed non-cooperative tile assembly system","authors":"Sergiu Ivanov, Damien Regnault","doi":"arxiv-2405.18630","DOIUrl":"https://doi.org/arxiv-2405.18630","url":null,"abstract":"The abstract tile assembly model (aTam) is a model of DNA self-assembly. Most\u0000of the studies focus on cooperative aTam where a form of synchronization\u0000between the tiles is possible. Simulating Turing machines is achievable in this\u0000context. Few results and constructions are known for the non-cooperative case\u0000(a variant of Wang tilings where assemblies do not need to cover the whole\u0000plane and some mismatches may occur). Introduced by P.E. Meunier and D. Regnault, efficient paths are a non-trivial\u0000construction for non-cooperative aTam. These paths of width nlog(n) are\u0000designed with n different tile types. Assembling them relies heavily on a form\u0000of ``non-determinism''. Indeed, the set of tiles may produced different finite\u0000terminal assemblies but they all contain the same efficient path. Directed\u0000non-cooperative aTam does not allow this non-determinism as only one assembly\u0000may be produced by a tile assembly system. This variant of aTam is the only one\u0000who was shown to be decidable. In this paper, we show that if the terminal assembly of a directed\u0000non-cooperative tile assembly system is finite then its width and length are of\u0000linear size according to the size of the tile assembly system. This result\u0000implies that the construction of efficient paths cannot be generalized to the\u0000directed case and that some computation must rely on a competition between\u0000different paths. It also implies that the construction of a square of width n\u0000using 2n-1 tiles types is asymptotically optimal. Moreover, we hope that the\u0000techniques introduced here will lead to a better comprehension of the\u0000non-directed case.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141196664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}