Limitations of Affine Integer Relaxations for Solving Constraint Satisfaction Problems
Moritz Lichter, Benedikt Pago (arXiv:2407.09097, 2024-07-12)

We show that various known algorithms for finite-domain constraint satisfaction problems (CSPs), which are based on solving systems of linear equations over the integers, fail to solve all tractable CSPs correctly. The algorithms include $\mathbb{Z}$-affine $k$-consistency, BLP+AIP, every fixed level of the BA$^{k}$-hierarchy, and the CLAP algorithm. In particular, we refute the conjecture by Dalmau and Opršal that there is a fixed constant $k$ such that the $\mathbb{Z}$-affine $k$-consistency algorithm solves all tractable finite-domain CSPs.
Circuits and Backdoors: Five Shades of the SETH
Michael Lampis (arXiv:2407.09683, 2024-07-12)

The SETH is a hypothesis of fundamental importance to (fine-grained) parameterized complexity theory, and many important tight lower bounds are based on it. This situation is somewhat problematic, because the validity of the SETH is not universally believed and because, in some senses, the SETH seems to be "too strong" a hypothesis for the considered lower bounds. Motivated by this, we consider a number of reasonable weakenings of the SETH that render it more plausible, with sources ranging from circuit complexity, to backdoors for SAT-solving, to graph width parameters, to weighted satisfiability problems. Despite the diversity of the different formulations, we are able to uncover several non-obvious connections using tools from classical complexity theory. This leads us to a hierarchy of five main equivalence classes of hypotheses, with some of the highlights being the following: we show that beating brute-force search for SAT parameterized by a modulator to a graph of bounded pathwidth, or bounded treewidth, or logarithmic tree-depth is actually the same question, and is in fact equivalent to beating brute force for circuits of depth $\epsilon n$; we show that beating brute-force search for a strong 2-SAT backdoor is equivalent to beating brute-force search for a modulator to logarithmic pathwidth; we show that beating brute-force search for a strong Horn backdoor is equivalent to beating brute-force search for arbitrary circuit SAT.
Approximate Degree Composition for Recursive Functions
Sourav Chakraborty, Chandrima Kayal, Rajat Mittal, Manaswi Paraashar, Nitin Saurabh (arXiv:2407.08385, 2024-07-11)

Determining whether approximate degree composes for Boolean functions remains a significant unsolved problem in Boolean function complexity. In recent decades, researchers have concentrated on proving that approximate degree composes for special types of inner and outer functions. An important and extensively studied class of functions are the recursive functions, i.e., functions obtained by composing a base function with itself a number of times. Let $h^d$ denote the standard $d$-fold composition of the base function $h$. The main result of this work is to show that approximate degree composes if either of the following conditions holds:
- The outer function $f:\{0,1\}^n \to \{0,1\}$ is a recursive function of the form $h^d$, with $h$ being any base function and $d = \Omega(\log\log n)$.
- The inner function is a recursive function of the form $h^d$, with $h$ being any constant-arity base function (other than AND and OR) and $d = \Omega(\log\log n)$, where $n$ is the arity of the outer function.
In terms of proof techniques, we first observe that the lower bound for composition can be obtained by introducing majority in between the inner and the outer functions. We then show that majority can be efficiently eliminated if the inner or outer function is a recursive function.
Coordinating "7 Billion Humans" is hard
Alessandro Panconesi, Pietro Maria Posta, Mirko Giacchini (arXiv:2407.07246, 2024-07-09)

In the video game "7 Billion Humans", the player is asked to direct a group of workers to various destinations by writing a program that is executed simultaneously by each worker. While the game is quite rich and is, indeed, considered one of the best games for beginners to learn the basics of programming, we show that even extremely simple versions are already NP-hard or PSPACE-hard.
Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension
Gautam Chandrasekaran, Adam Klivans, Vasilis Kontonis, Raghu Meka, Konstantinos Stavropoulos (arXiv:2407.00966, 2024-07-01)

In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on $\mathbb{R}^d \times \{\pm 1\}$ -- is to output a hypothesis that is competitive (to within $\epsilon$) with the best-fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes, we introduce a smoothed-analysis framework that requires a learner to compete only with the best classifier that is robust to small random Gaussian perturbations. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka a multi-index model) and (2) has bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of $k$ halfspaces in time $k^{\mathrm{poly}(\frac{\log k}{\epsilon \gamma})}$, where $\gamma$ is the margin parameter. Before our work, the best-known runtime was exponential in $k$ (Arriaga and Vempala, 1999).
An XOR Lemma for Deterministic Communication Complexity
Siddharth Iyer, Anup Rao (arXiv:2407.01802, 2024-07-01)

We prove a lower bound on the communication complexity of computing the $n$-fold xor of an arbitrary function $f$, in terms of the communication complexity and rank of $f$. We prove that $D(f^{\oplus n}) \geq n \cdot \Big(\frac{\Omega(D(f))}{\log \mathsf{rk}(f)} - \log \mathsf{rk}(f)\Big)$, where $D(f)$ and $D(f^{\oplus n})$ denote the deterministic communication complexity and $\mathsf{rk}(f)$ is the rank of $f$. Our methods involve a new way to use information theory to reason about deterministic communication complexity.
On the approximability of graph visibility problems
Davide Bilò, Alessia Di Fonso, Gabriele Di Stefano, Stefano Leucci (arXiv:2407.00409, 2024-06-29)

Visibility problems have been investigated for a long time under different assumptions, as they pose challenging combinatorial problems and are connected to robot navigation problems. The mutual-visibility problem in a graph $G$ of $n$ vertices asks to find the largest set of vertices $X \subseteq V(G)$, also called a $\mu$-set, such that for any two vertices $u,v \in X$ there is a shortest $u,v$-path $P$ whose internal vertices are all outside $X$. This means that $u$ and $v$ are visible w.r.t. $X$. Variations of this problem are known as the total, outer, and dual mutual-visibility problems, depending on the visibility property of vertices inside and/or outside $X$. The mutual-visibility problem and all its variations are known to be $\mathsf{NP}$-complete on graphs of diameter $4$. In this paper, we design a polynomial-time algorithm that finds a $\mu$-set of size $\Omega\big(\sqrt{n/\overline{D}}\big)$, where $\overline{D}$ is the average distance between any two vertices of $G$. Moreover, we show inapproximability results for all visibility problems on graphs of diameter $2$ and strengthen the inapproximability ratios for graphs of diameter $3$ or larger. More precisely, for graphs of diameter at least $3$ and for every constant $\varepsilon > 0$, we show that the mutual-visibility and dual mutual-visibility problems are not approximable within a factor of $n^{1/3-\varepsilon}$, while the outer and total mutual-visibility problems are not approximable within a factor of $n^{1/2-\varepsilon}$, unless $\mathsf{P}=\mathsf{NP}$. Furthermore, we study the relationship between the mutual-visibility number and the general position number, in which no three distinct vertices $u,v,w$ of $X$ belong to a common shortest path of $G$.
Distance to Transitivity: New Parameters for Taming Reachability in Temporal Graphs
Arnaud Casteigts, Nils Morawietz, Petra Wolf (arXiv:2406.19514, 2024-06-27)

A temporal graph is a graph whose edges only appear at certain points in time. Reachability in these graphs is defined in terms of paths that traverse the edges in chronological order (temporal paths). This form of reachability is neither symmetric nor transitive, the latter having important consequences on the computational complexity of even basic questions, such as computing temporal connected components. In this paper, we introduce several parameters that capture how far a temporal graph $\mathcal{G}$ is from being transitive, namely, vertex-deletion distance to transitivity and arc-modification distance to transitivity, both applied to the reachability graph of $\mathcal{G}$. We illustrate the impact of these parameters on the temporal connected component problem, obtaining several tractability results in terms of fixed-parameter tractability and polynomial kernels. Significantly, these results are obtained without restrictions on the underlying graph, the snapshots, or the lifetime of the input graph. As such, our results isolate the impact of non-transitivity and confirm the key role that it plays in the hardness of temporal graph problems.
On Fourier analysis of sparse Boolean functions over certain Abelian groups
Sourav Chakraborty, Swarnalipa Datta, Pranjal Dutta, Arijit Ghosh, Swagato Sanyal (arXiv:2406.18700, 2024-06-26)

Given an Abelian group $G$, a Boolean-valued function $f: G \to \{-1,+1\}$ is said to be $s$-sparse if it has at most $s$ non-zero Fourier coefficients over the domain $G$. In a seminal paper, Gopalan et al. proved "granularity" for the Fourier coefficients of Boolean-valued functions over $\mathbb{Z}_2^n$, which has found many diverse applications in theoretical computer science and combinatorics. They also studied structural results for Boolean functions over $\mathbb{Z}_2^n$ which are approximately Fourier-sparse. In this work, we obtain structural results for approximately Fourier-sparse Boolean-valued functions over Abelian groups $G$ of the form $G := \mathbb{Z}_{p_1}^{n_1} \times \cdots \times \mathbb{Z}_{p_t}^{n_t}$, for distinct primes $p_i$. We also obtain a lower bound of the form $1/(m^{2}s)^{\lceil \phi(m)/2 \rceil}$ on the absolute value of the smallest non-zero Fourier coefficient of an $s$-sparse function, where $m = p_1 \cdots p_t$ and $\phi(m) = (p_1-1)\cdots(p_t-1)$. We carefully apply probabilistic techniques from Gopalan et al. to obtain our structural results, and use some non-trivial results from algebraic number theory to get the lower bound. We construct a family of at most $s$-sparse Boolean functions over $\mathbb{Z}_p^n$, where $p > 2$, for arbitrarily large enough $s$, where the minimum non-zero Fourier coefficient is $1/\omega(n)$. The "granularity" result of Gopalan et al. implies that the absolute values of the non-zero Fourier coefficients of any $s$-sparse Boolean-valued function over $\mathbb{Z}_2^n$ are $1/O(s)$. So, our result shows that one cannot expect such a lower bound for general Abelian groups. Using our new structural results on the Fourier coefficients of sparse functions, we design an efficient testing algorithm for Fourier-sparse Boolean functions that requires $\mathrm{poly}((ms)^{\phi(m)}, 1/\epsilon)$ queries. Further, we prove an $\Omega(\sqrt{s})$ lower bound on the query complexity of any adaptive sparsity-testing algorithm.
Consistent Query Answering over SHACL Constraints
Shqiponja Ahmetaj, Timo Camillo Merkl, Reinhard Pichler (arXiv:2406.16653, 2024-06-24)

The Shapes Constraint Language (SHACL) was standardized by the World Wide Web Consortium (W3C) as a constraint language to describe and validate RDF data graphs. SHACL uses the notion of a shapes graph to describe a set of shape constraints paired with targets, which specify which nodes of the RDF graph should satisfy which shapes. An important question in practice is how to handle data graphs that do not validate against the shapes graph. A solution is to tolerate the non-validation and find ways to obtain meaningful and correct answers to queries despite it. This is known as consistent query answering (CQA), and there is extensive literature on CQA in both the database and the KR settings. We study CQA in the context of SHACL for a fundamental fragment of the Semantic Web query language SPARQL. The goal of our work is a detailed complexity analysis of CQA for various semantics and possible restrictions on the acceptable repairs. It turns out that all considered variants of the problem are intractable, with complexities ranging between the first and third levels of the polynomial hierarchy.