We propose a framework of algorithm vs. hardness for all Max-CSPs and demonstrate it for a large class of predicates. This framework extends the work of Raghavendra [STOC, 2008], who showed a similar result for almost satisfiable Max-CSPs. Our framework is based on a new hybrid approximation algorithm, which uses a combination of the Gaussian elimination technique (i.e., solving a system of linear equations over an Abelian group) and the semidefinite programming relaxation. We complement our algorithm with a matching dictator vs. quasirandom test that has perfect completeness. The analysis of our dictator vs. quasirandom test is based on a novel invariance principle, which we call the mixed invariance principle. Our mixed invariance principle is an extension of the invariance principle of Mossel, O'Donnell and Oleszkiewicz [Annals of Mathematics, 2010], which plays a crucial role in Raghavendra's work. The mixed invariance principle allows one to relate 3-wise correlations over discrete probability spaces with expectations over spaces that are a mixture of Gaussian spaces and Abelian groups, and may be of independent interest.
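As a hedged illustration of one ingredient of the hybrid algorithm described above, the following sketch performs Gaussian elimination over $Z_p$ (a special case of solving linear equations over a finite Abelian group, assuming $p$ prime). This is not the paper's algorithm; the function name is our own.

```python
# Minimal sketch (our own naming, not the paper's algorithm): Gaussian
# elimination over Z_p with p prime, i.e., solving a linear system over
# a finite Abelian group.

def solve_mod_p(A, b, p):
    """Return one solution x of A x = b over Z_p, or None if none exists."""
    n, m = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    pivots, r = [], 0
    for c in range(m):
        # Find a row at or below r with a nonzero entry in column c.
        pivot = next((i for i in range(r, n) if M[i][c] % p), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], p - 2, p)               # Fermat inverse; needs p prime
        M[r] = [x * inv % p for x in M[r]]
        for i in range(n):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    # An inconsistent row 0 ... 0 | nonzero means no solution exists.
    if any(M[i][m] % p for i in range(r, n)):
        return None
    x = [0] * m
    for i, c in enumerate(pivots):
        x[c] = M[i][m]
    return x
```

For example, the system x + y = 1, x + 2y = 0 over $Z_3$ has the solution x = y = 2.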
"On Approximability of Satisfiable k-CSPs: V", Amey Bhangale, Subhash Khot, Dor Minzer. arXiv:2408.15377, arXiv - CS - Computational Complexity, 2024-08-27.
A catalytic Turing machine is a variant of a Turing machine in which there exists an auxiliary tape in addition to the input tape and the work tape. This auxiliary tape is initially filled with arbitrary content. The machine can read and write on the auxiliary tape, but it is constrained to restore its initial content when it halts. Studying such a model and finding its powers and limitations has practical applications. In this paper, we study catalytic Turing machines with an O(log n)-sized work tape and a polynomial-sized auxiliary tape that are allowed to lose at most a constant number of bits of the auxiliary tape when they halt. We show that such catalytic Turing machines decide exactly the same set of languages as standard catalytic Turing machines with the same work and auxiliary tape sizes.
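The defining constraint above, restoring arbitrary auxiliary content at halt, can be illustrated with a toy sketch (not this paper's construction; the function name is a hypothetical helper of ours): reversible XOR updates scramble a borrowed cell and then undo themselves in reverse order, so the cell returns to its initial value regardless of what that value was.

```python
# Toy illustration (not this paper's construction) of the catalytic
# constraint: borrowed memory may be scrambled mid-computation but must
# end up back at its arbitrary initial content.

def xor_with_restore(x, y, aux):
    """Borrow the cell `aux`, use it, and hand it back unchanged."""
    aux ^= x                 # scramble the borrowed cell ...
    aux ^= y
    scrambled = aux          # = initial ^ x ^ y; still depends on the junk
    aux ^= y                 # ... then undo the updates in reverse order
    aux ^= x
    return scrambled, aux    # aux equals its initial value again

# The restoration holds for every possible initial content:
for junk in (0, 7, 0b101010):
    _, restored = xor_with_restore(5, 3, junk)
    assert restored == junk
```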
"Lossy Catalytic Computation", Chetan Gupta, Rahul Jain, Vimal Raj Sharma, Raghunath Tewari. arXiv:2408.14670, arXiv - CS - Computational Complexity, 2024-08-26.
The constraint satisfaction problem asks to decide if a set of constraints over a relational structure $\mathcal{A}$ is satisfiable (CSP$(\mathcal{A})$). We consider CSP$(\mathcal{A} \cup \mathcal{B})$, where $\mathcal{A}$ is a structure and $\mathcal{B}$ is an alien structure, and analyse its (parameterized) complexity when at most $k$ alien constraints are allowed. We establish connections to, and obtain transferable complexity results for, several well-studied problems that previously escaped classification attempts. Our novel approach, utilizing logical and algebraic methods, yields an FPT versus pNP dichotomy for arbitrary finite structures and sharper dichotomies for Boolean structures and first-order reducts of $(\mathbb{N},=)$ (equality CSPs), together with many partial results for general $\omega$-categorical structures.
"CSPs with Few Alien Constraints", Peter Jonsson, Victor Lagerkvist, George Osipov. arXiv:2408.12909, arXiv - CS - Computational Complexity, 2024-08-23.
Denote by $H$ the Halting problem. Let $R_U := \{ x \mid C_U(x) \ge |x| \}$, where $C_U(x)$ is the plain Kolmogorov complexity of $x$ under a universal decompressor $U$. We prove that there exists a universal $U$ such that $H \in P^{R_U}$, solving the problem posed by Eric Allender.
"On the computational power of $C$-random strings", Alexey Milovanov. arXiv:2409.04448, arXiv - CS - Computational Complexity, 2024-08-23.
Rida Ait El Manssour, Nikhil Balaji, Klara Nosan, Mahsa Shirmohammadi, James Worrell
Hilbert's Nullstellensatz is a fundamental result in algebraic geometry that gives a necessary and sufficient condition for a finite collection of multivariate polynomials to have a common zero in an algebraically closed field. Associated with this result, there is the computational problem HN of determining whether a system of polynomials with coefficients in the field of rational numbers has a common zero over the field of algebraic numbers. In an influential paper, Koiran showed that HN can be determined in the polynomial hierarchy assuming the Generalised Riemann Hypothesis (GRH). More precisely, he showed that HN lies in the complexity class AM under GRH. In a later work he generalised this result by showing that the problem DIM, which asks to determine the dimension of the set of solutions of a given polynomial system, also lies in AM subject to GRH. In this paper we study the solvability of polynomial equations over arbitrary algebraically closed fields of characteristic zero. Up to isomorphism, every such field is the algebraic closure of a field of rational functions. We thus formulate a parametric version of HN, called HNP, in which the input is a system of polynomials with coefficients in a function field $\mathbb{Q}(\mathbf{x})$ and the task is to determine whether the polynomials have a common zero in the algebraic closure $\overline{\mathbb{Q}(\mathbf{x})}$. We observe that Koiran's proof that DIM lies in AM can be interpreted as a randomised polynomial-time reduction of DIM to HNP, followed by an argument that HNP lies in AM. Our main contribution is a self-contained proof that HNP lies in AM that follows the same basic idea as Koiran's argument, namely random instantiation of the parameters, but whose justification is purely algebraic, relying on a parametric version of Hilbert's Nullstellensatz, and avoiding recourse to semi-algebraic geometry.
"A parametric version of the Hilbert Nullstellensatz", Rida Ait El Manssour, Nikhil Balaji, Klara Nosan, Mahsa Shirmohammadi, James Worrell. arXiv:2408.13027, arXiv - CS - Computational Complexity, 2024-08-23.
The s-club cluster vertex deletion number of a graph, or sccvd, is the minimum number of vertices whose deletion results in a disjoint union of s-clubs, i.e., graphs whose diameter is bounded above by s. We launch a study of several domination problems on diameter-two graphs, or 2-clubs, and study their parameterized complexity with respect to the 2ccvd number as the main parameter. We further propose to explore the class of problems that become solvable in sub-exponential time when the running time is independent of some input parameter. Hardness of problems for this class depends on the Exponential-Time Hypothesis. We give examples of problems that belong to the proposed class and problems that are hard for it.
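For intuition on the parameter defined above, the sccvd number can be computed by brute force on tiny graphs. This is an illustrative sketch under our own encoding and naming (dict of adjacency sets); real parameterized algorithms avoid this exponential enumeration.

```python
# Brute-force computation of the s-club cluster vertex deletion number
# (sccvd) for tiny graphs; encoding and names are our own illustration.
from itertools import combinations

def components_have_diameter_at_most(g, nodes, s):
    """BFS from each kept vertex; True iff every remaining component has diameter <= s."""
    nodes = set(nodes)
    for src in nodes:
        dist = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in g[u] & nodes:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        if any(d > s for d in dist.values()):
            return False
    return True

def sccvd(g, s):
    """Minimum number of deletions leaving a disjoint union of s-clubs."""
    vertices = list(g)
    for k in range(len(vertices) + 1):
        for deleted in combinations(vertices, k):
            if components_have_diameter_at_most(g, set(vertices) - set(deleted), s):
                return k
```

On the path 0-1-2-3 (diameter 3), deleting one endpoint leaves a 2-club, so its 2ccvd number is 1.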
"Domination in Diameter-Two Graphs and the 2-Club Cluster Vertex Deletion Parameter", Faisal N. Abu-Khzam, Lucas Isenmann. arXiv:2408.08418, arXiv - CS - Computational Complexity, 2024-08-15.
We study the existence of optimal proof systems for sets outside of $\mathrm{NP}$. Currently, no set $L \notin \mathrm{NP}$ is known to have optimal proof systems. Our main result shows that this is not surprising, because we can rule out relativizable proofs of optimality for all sets outside $\mathrm{NTIME}(t)$ where $t$ is slightly superpolynomial. We construct an oracle $O$ such that for any set $L \subseteq \Sigma^*$ at least one of the following two properties holds: (i) $L$ does not have optimal proof systems relative to $O$; (ii) $L \in \mathrm{UTIME}^O(2^{2(\log n)^{8+4\log(\log(\log(n)))}})$. The runtime bound is slightly superpolynomial, so there is no relativizable proof showing that a complex set has optimal proof systems. Hence, searching for non-trivial optimal proof systems with relativizable methods can only be successful (if at all) in a narrow range above $\mathrm{NP}$.
"Oracle without Optimal Proof Systems outside Nondeterministic Subexponential Time", Fabian Egidy, Christian Glaßer. arXiv:2408.07408, arXiv - CS - Computational Complexity, 2024-08-14.
Giuseppe Mazzotta, Francesco Ricca, Mirek Truszczynski
Answer Set Programming with Quantifiers (ASP(Q)) has been introduced to provide a natural extension of ASP modeling to problems in the polynomial hierarchy (PH). However, ASP(Q) lacks a method for encoding, in an elegant and compact way, problems requiring a polynomial number of calls to an oracle in $\Sigma_n^p$ (that is, problems in $\Delta_{n+1}^p$). Such problems include, in particular, optimization problems. In this paper we propose an extension of ASP(Q) in which component programs may contain weak constraints. Weak constraints can be used both for expressing local optimization within quantified component programs and for modeling global optimization criteria. We showcase the modeling capabilities of the new formalism through various application scenarios. Further, we study its computational properties, obtaining complexity results and unveiling non-obvious characteristics of ASP(Q) programs with weak constraints.
"Quantifying over Optimum Answer Sets", Giuseppe Mazzotta, Francesco Ricca, Mirek Truszczynski. arXiv:2408.07697, arXiv - CS - Computational Complexity, 2024-08-14.
In the Max-2Lin(2) problem you are given a system of equations of the form $x_i + x_j \equiv b \pmod{2}$, and your objective is to find an assignment that satisfies as many equations as possible. Let $c \in [0.5, 1]$ denote the maximum fraction of satisfiable equations. In this paper we construct a curve $s(c)$ such that it is NP-hard to find a solution satisfying at least a fraction $s$ of the equations. This curve either matches or improves all previously known NP-hardness inapproximability results for Max-2Lin(2). In particular, we show that if $c \geq 0.9232$ then $\frac{1 - s(c)}{1 - c} > 1.48969$, which improves the NP-hardness inapproximability constant for the min-deletion version of Max-2Lin(2). Our work complements the work of O'Donnell and Wu, who studied the same question assuming the Unique Games Conjecture. Similar to earlier inapproximability results for Max-2Lin(2), we use a gadget reduction from the $(2^k - 1)$-ary Hadamard predicate. Previous works used $k$ ranging from $2$ to $4$. Our main result is a procedure for taking a gadget for some fixed $k$ and using it as a building block to construct better and better gadgets as $k$ tends to infinity. Our method can be used to boost the results of both smaller gadgets created by hand ($k = 3$) and larger gadgets constructed using a computer ($k = 4$).
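As a minimal sketch of the problem statement (not of the paper's gadget machinery; the function name is our own), the optimum fraction $c$ can be computed by brute force on tiny instances:

```python
# Brute force for Max-2Lin(2): equations x_i + x_j = b (mod 2) over n
# Boolean variables. Exponential in n, for illustration only.
from itertools import product

def max2lin2(n, equations):
    """equations: list of (i, j, b) triples; returns the optimum fraction c."""
    best = 0
    for assignment in product((0, 1), repeat=n):
        satisfied = sum((assignment[i] + assignment[j]) % 2 == b
                        for i, j, b in equations)
        best = max(best, satisfied)
    return best / len(equations)
```

An odd cycle with all right-hand sides equal to 1 is the classic example where $c < 1$: summing the three equations gives $0 \equiv 1 \pmod{2}$, so at most two of them are simultaneously satisfiable and $c = 2/3$.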
"On the NP-Hardness Approximation Curve for Max-2Lin(2)", Björn Martinsson. arXiv:2408.04832, arXiv - CS - Computational Complexity, 2024-08-09.
Léo Saulières, Martin C. Cooper, Florence Dupin de Saint Cyr
History eXplanation based on Predicates (HXP) studies the behavior of a Reinforcement Learning (RL) agent over a sequence of the agent's interactions with the environment (a history), through the prism of an arbitrary predicate. To this end, an action importance score is computed for each action in the history. The explanation consists of displaying the most important actions to the user. As computing an action's importance is #W[1]-hard, for long histories it is necessary to approximate the scores, at the expense of their quality. We therefore propose a new HXP method, called Backward-HXP (B-HXP), to provide explanations for these histories without having to approximate scores. Experiments show the ability of B-HXP to summarise long histories.
"Backward explanations via redefinition of predicates", Léo Saulières, Martin C. Cooper, Florence Dupin de Saint Cyr. arXiv:2408.02606, arXiv - CS - Computational Complexity, 2024-08-05.