Amir Abboud, K. Censor-Hillel, Seri Khoury, A. Paz
This article proves strong lower bounds for distributed computing in the CONGEST model by presenting the bit-gadget: a new technique for constructing graphs with small cuts. The contribution of bit-gadgets is twofold. First, careful sparse graph constructions with small cuts extend known techniques, yielding a near-linear lower bound for computing the diameter, a result previously known only for dense graphs. Moreover, the sparseness of the construction plays a crucial role in applying it to approximations of various distance computation problems, drastically improving over what can be obtained with dense graphs. Second, small cuts are essential for proving super-linear lower bounds, none of which were known prior to this work. In fact, they allow us to show near-quadratic lower bounds for several problems, such as exact minimum vertex cover or maximum independent set, as well as for coloring a graph with its chromatic number. Such strong lower bounds are not limited to NP-hard problems: two simple graph problems in P are shown to require a quadratic and a near-quadratic number of rounds, respectively. All of the above are optimal up to logarithmic factors. In addition, the complexity of the all-pairs shortest paths problem in this context is discussed. Finally, it is shown that graph constructions for CONGEST lower bounds translate to lower bounds for the semi-streaming model, despite the two models being very different in nature.
{"title":"Smaller Cuts, Higher Lower Bounds","authors":"Amir Abboud, K. Censor-Hillel, Seri Khoury, A. Paz","doi":"10.1145/3469834","DOIUrl":"https://doi.org/10.1145/3469834","url":null,"abstract":"This article proves strong lower bounds for distributed computing in the congest model, by presenting the bit-gadget: a new technique for constructing graphs with small cuts. The contribution of bit-gadgets is twofold. First, developing careful sparse graph constructions with small cuts extends known techniques to show a near-linear lower bound for computing the diameter, a result previously known only for dense graphs. Moreover, the sparseness of the construction plays a crucial role in applying it to approximations of various distance computation problems, drastically improving over what can be obtained when using dense graphs. Second, small cuts are essential for proving super-linear lower bounds, none of which were known prior to this work. In fact, they allow us to show near-quadratic lower bounds for several problems, such as exact minimum vertex cover or maximum independent set, as well as for coloring a graph with its chromatic number. Such strong lower bounds are not limited to NP-hard problems, as given by two simple graph problems in P, which are shown to require a quadratic and near-quadratic number of rounds. All of the above are optimal up to logarithmic factors. In addition, in this context, the complexity of the all-pairs-shortest-paths problem is discussed. Finally, it is shown that graph constructions for congest lower bounds translate to lower bounds for the semi-streaming model, despite being very different in its nature.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"567 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115622571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nairen Cao, Jeremy T. Fineman, Katina Russell, Eugene Yang
This article presents I/O-efficient algorithms for topologically sorting a directed acyclic graph and for the more general problem of identifying and topologically sorting the strongly connected components of a directed graph G = (V, E). Both algorithms are randomized and have I/O cost O(sort(E) · poly(log V)) with high probability, where sort(E) = O((E/B) · log_{M/B}(E/B)) is the I/O cost of sorting an |E|-element array on a machine with size-B blocks and a size-M cache/internal memory. These are the first algorithms for these problems that do not incur at least one I/O per vertex, and as such they are the first I/O-efficient algorithms for sparse graphs. By applying the technique of time-forward processing, these algorithms also imply I/O-efficient algorithms for most problems on directed acyclic graphs, such as shortest paths, as well as for the single-source reachability problem on arbitrary directed graphs.
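To make the sort(E) term concrete, here is a small Python helper (our own illustrative code; the function name and the choice to ignore the hidden constant are ours) that evaluates (E/B) · log_{M/B}(E/B) for given machine parameters, showing how far below one-I/O-per-vertex the sorting bound sits:

```python
import math

def sort_io_bound(E: int, B: int, M: int) -> float:
    """Evaluate the sorting I/O bound (E/B) * log_{M/B}(E/B).

    E: number of edges (array elements), B: block size in elements,
    M: cache/internal-memory size in elements. Returns the bound up
    to the constant hidden in the O-notation.
    """
    blocks = E / B   # number of size-B blocks to move
    fanout = M / B   # merge fan-out achievable with a size-M cache
    return blocks * max(1.0, math.log(blocks, fanout))

# Example: 10^9 edges, 4096-element blocks, 2^30-element cache.
# The result (~2.4 * 10^5 I/Os) is far below 10^9, i.e., far below
# the one-I/O-per-vertex cost the article's algorithms avoid.
print(sort_io_bound(10**9, 4096, 2**30))
```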
{"title":"I/O-Efficient Algorithms for Topological Sort and Related Problems","authors":"Nairen Cao, Jeremy T. Fineman, Katina Russell, Eugene Yang","doi":"10.1145/3418356","DOIUrl":"https://doi.org/10.1145/3418356","url":null,"abstract":"This article presents I/O-efficient algorithms for topologically sorting a directed acyclic graph and for the more general problem identifying and topologically sorting the strongly connected components of a directed graph G = (V, E). Both algorithms are randomized and have I/O-costs O(sort(E) · poly(log V)), with high probability, where sort(E) = O(E/B log M/B(E/B)) is the I/O cost of sorting an |E|-element array on a machine with size-B blocks and size-M cache/internal memory. These are the first algorithms for these problems that do not incur at least one I/O per vertex, and as such these are the first I/O-efficient algorithms for sparse graphs. By applying the technique of time-forward processing, these algorithms also imply I/O-efficient algorithms for most problems on directed acyclic graphs, such as shortest paths, as well as the single-source reachability problem on arbitrary directed graphs.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133658274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Mastrolilli
Given an ideal I and a polynomial f, the Ideal Membership Problem (IMP) is to test whether f ∈ I. This is a fundamental algorithmic problem with important applications, and it is notoriously intractable. We study the complexity of the IMP for combinatorial ideals that arise from constrained problems over the Boolean domain. As our main result, we identify the borderline of tractability. Using Gröbner bases techniques, we extend Schaefer's dichotomy theorem [STOC, 1978], which classifies all Constraint Satisfaction Problems (CSPs) over the Boolean domain as either in P or NP-hard. Moreover, our result implies necessary and sufficient conditions for the efficient computation of Theta Body Semi-Definite Programming (SDP) relaxations, thereby identifying the borderline of tractability for constraint language problems. This article is motivated by the pursuit of understanding the recently raised issue of bit complexity of Sum-of-Squares (SoS) proofs [O'Donnell, ITCS, 2017]. Raghavendra and Weitz [ICALP, 2017] show how IMP tractability for combinatorial ideals implies bounded coefficients in SoS proofs.
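As a purely illustrative instance of the IMP (not the article's algorithm, and exponential-time in general), one can test membership in an ideal of Q[x, y] with sympy's Gröbner basis routines: f belongs to I exactly when f reduces to zero modulo a Gröbner basis of I. The example polynomials below are ours:

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Ideal I = <x^2 + y^2 - 1, x - y> in Q[x, y]:
# a circle intersected with a line.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

f_in  = (x - y) * (x + 5)   # a polynomial multiple of x - y, so f_in is in I
f_out = x + y               # leaves a nonzero remainder modulo G

print(G.contains(f_in))     # True:  f_in reduces to 0 modulo G
print(G.contains(f_out))    # False: remainder 2y (mod 2y^2 - 1) is nonzero
```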
{"title":"The Complexity of the Ideal Membership Problem for Constrained Problems Over the Boolean Domain","authors":"M. Mastrolilli","doi":"10.1145/3449350","DOIUrl":"https://doi.org/10.1145/3449350","url":null,"abstract":"Given an ideal I and a polynomial f the Ideal Membership Problem (IMP) is to test if f ϵ I. This problem is a fundamental algorithmic problem with important applications and notoriously intractable. We study the complexity of the IMP for combinatorial ideals that arise from constrained problems over the Boolean domain. As our main result, we identify the borderline of tractability. By using Gröbner bases techniques, we extend Schaefer’s dichotomy theorem [STOC, 1978] which classifies all Constraint Satisfaction Problems (CSPs) over the Boolean domain to be either in P or NP-hard. Moreover, our result implies necessary and sufficient conditions for the efficient computation of Theta Body Semi-Definite Programming (SDP) relaxations, identifying therefore the borderline of tractability for constraint language problems. This article is motivated by the pursuit of understanding the recently raised issue of bit complexity of Sum-of-Squares (SoS) proofs [O’Donnell, ITCS, 2017]. Raghavendra and Weitz [ICALP, 2017] show how the IMP tractability for combinatorial ideals implies bounded coefficients in SoS proofs.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131342298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sepehr Abbasi Zadeh, N. Bansal, Guru Guruganesh, Aleksandar Nikolov, Roy Schwartz, Mohit Singh
Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson [31] has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. Although this approach yields tight approximation guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and Max-DiCut, the tight approximation ratio is still unknown. One of the main reasons for this is that very few techniques for rounding semidefinite relaxations are known. In this work, we present a new, general, and simple method for rounding semidefinite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and derive new algorithms that are competitive with the best known results. To illustrate the versatility and general applicability of our approach, we give new approximation algorithms for the Max-Cut problem with side constraints that crucially utilize measure concentration results for Sticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations.
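A minimal numpy sketch of the rounding step, under our own crude time discretization (the article analyzes the continuous process, and starting the walk at the origin corresponds to the Max-Cut case): each SDP vector drives one coordinate of a shared random walk, and a coordinate "sticks" the first time it reaches +1 or -1:

```python
import numpy as np

def sticky_brownian_round(V: np.ndarray, dt: float = 1e-3,
                          rng=np.random.default_rng(0)) -> np.ndarray:
    """Round SDP solution vectors (rows of V, unit norm) to +/-1 labels.

    All coordinates are driven by one shared Gaussian per step, so their
    correlations match the Gram matrix V V^T; a coordinate freezes
    ('sticks') the first time it hits +1 or -1.
    """
    n, d = V.shape
    x = np.zeros(n)                  # walk positions, started at the origin
    stuck = np.zeros(n, dtype=bool)
    while not stuck.all():
        g = rng.standard_normal(d)   # shared Brownian increment direction
        x[~stuck] += np.sqrt(dt) * (V[~stuck] @ g)
        x = np.clip(x, -1.0, 1.0)    # walls at +/-1
        stuck |= (np.abs(x) >= 1.0)
    return np.sign(x)

# Toy use: two antipodal unit vectors are perfectly anticorrelated and
# land on opposite sides, mimicking hyperplane rounding on Max-Cut.
V = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(sticky_brownian_round(V))
```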
{"title":"Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems","authors":"Sepehr Abbasi Zadeh, N. Bansal, Guru Guruganesh, Aleksandar Nikolov, Roy Schwartz, Mohit Singh","doi":"10.1145/3459096","DOIUrl":"https://doi.org/10.1145/3459096","url":null,"abstract":"Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson [31] has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. Despite the fact that this approach yields tight approximation guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and Max-DiCut, the tight approximation ratio is still unknown. One of the main reasons for this is the fact that very few techniques for rounding semi-definite relaxations are known. In this work, we present a new general and simple method for rounding semi-definite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and derive new algorithms that are competitive with the best known results. To illustrate the versatility and general applicability of our approach, we give new approximation algorithms for the Max-Cut problem with side constraints that crucially utilizes measure concentration results for the Sticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130671618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Paz, Gregory Schwartzman
We present a simple deterministic single-pass (2+ε)-approximation algorithm for the maximum weight matching problem in the semi-streaming model. This improves on the previously best known approximation ratio of (4+ε). Our algorithm uses O(n log^2 n) bits of space for constant values of ε. It relies on a variation of the local-ratio theorem, which may be of use for other algorithms in the semi-streaming model as well.
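A sketch of how such a local-ratio pass can look (simplified from the article; the variable names and the toy stream are ours): keep a potential per vertex, stack every edge whose weight beats its endpoints' potentials by a (1+ε) factor while charging the residual gain to both endpoints, then unwind the stack greedily:

```python
from collections import defaultdict

def local_ratio_matching(edge_stream, eps: float = 0.1):
    """One-pass weighted-matching sketch via the local-ratio idea.

    edge_stream yields (u, v, w) triples. Edges that cannot beat the
    current potentials by a (1+eps) factor are dropped, which is what
    keeps the stack (and hence the space) small.
    """
    phi = defaultdict(float)   # vertex potentials
    stack = []
    for u, v, w in edge_stream:
        if w <= (1 + eps) * (phi[u] + phi[v]):
            continue                      # edge gains too little; drop it
        gain = w - (phi[u] + phi[v])      # local-ratio residual weight
        stack.append((u, v))
        phi[u] += gain
        phi[v] += gain
    matched, matching = set(), []
    while stack:                          # unwind: most recent edges first
        u, v = stack.pop()
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Toy path 1-2-3-4: the pass keeps all three edges, and unwinding
# selects the optimal matching {(3,4), (1,2)} of total weight 6.
print(local_ratio_matching([(1, 2, 3.0), (2, 3, 4.0), (3, 4, 3.0)]))
```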
{"title":"A (2+ε)-Approximation for Maximum Weight Matching in the Semi-streaming Model","authors":"A. Paz, Gregory Schwartzman","doi":"10.1145/3274668","DOIUrl":"https://doi.org/10.1145/3274668","url":null,"abstract":"We present a simple deterministic single-pass (2+ε)-approximation algorithm for the maximum weight matching problem in the semi-streaming model. This improves on the currently best known approximation ratio of (4+ε). Our algorithm uses O(nlog2 n) bits of space for constant values of ε. It relies on a variation of the local-ratio theorem, which may be of use for other algorithms in the semi-streaming model as well.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"293 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116568419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anders Roy Christiansen, Mikko Berggren Ettienne, T. Kociumaka, G. Navarro, N. Prezza
We describe the first self-indexes able to count and locate pattern occurrences in optimal time within a space bounded by the size of the most popular dictionary compressors. To achieve this result, we combine several recent findings, including string attractors (new combinatorial objects encompassing most known compressibility measures for highly repetitive texts) and grammars based on locally consistent parsing. In more detail, let γ be the size of the smallest attractor for a text T of length n. The measure γ is an (asymptotic) lower bound on the size of dictionary compressors based on Lempel–Ziv, context-free grammars, and many others. The smallest known text representations in terms of attractors use space O(γ log(n/γ)), and our lightest indexes work within the same asymptotic space. Let ε > 0 be a suitably small constant fixed at construction time, m be the pattern length, and occ be the number of its text occurrences. Our index counts pattern occurrences in O(m + log^{2+ε} n) time and locates them in O(m + (occ+1) log^ε n) time. These times already outperform those of most dictionary-compressed indexes, while obtaining the least asymptotic space for any index searching within O((m + occ) polylog n) time. Further, by increasing the space to O(γ log(n/γ) log^ε n), we reduce the locating time to the optimal O(m + occ), and within O(γ log(n/γ) log n) space we can also count in optimal O(m) time. No dictionary-compressed index had obtained these times before. All our indexes can be constructed in O(n) space and O(n log n) expected time. As a by-product of independent interest, we show how to build, in O(n) expected time and without knowing the size γ of the smallest attractor (which is NP-hard to find), a run-length context-free grammar of size O(γ log(n/γ)) generating (only) T. As a result, our indexes can be built without knowing γ.
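To make the string-attractor definition concrete, here is a brute-force checker (our own code, suitable only for toy inputs): a set of positions is an attractor of T exactly when every distinct substring of T has at least one occurrence that crosses one of the positions.

```python
def is_attractor(T: str, positions: set) -> bool:
    """Check whether `positions` (0-based) is a string attractor of T:
    every distinct substring of T must have some occurrence T[i..i+len-1]
    containing an attractor position p with i <= p < i + len.
    Brute force over all substrings; for illustration only.
    """
    n = len(T)
    for length in range(1, n + 1):
        for sub in {T[i:i + length] for i in range(n - length + 1)}:
            # Does some occurrence of `sub` cross an attractor position?
            if not any(T[i:i + length] == sub and
                       any(i <= p < i + length for p in positions)
                       for i in range(n - length + 1)):
                return False
    return True

print(is_attractor("abab", {1, 2}))  # True: every substring crosses 1 or 2
print(is_attractor("abab", {1}))     # False: no occurrence of 'a' crosses 1
```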
{"title":"Optimal-Time Dictionary-Compressed Indexes","authors":"Anders Roy Christiansen, Mikko Berggren Ettienne, T. Kociumaka, G. Navarro, N. Prezza","doi":"10.1145/3426473","DOIUrl":"https://doi.org/10.1145/3426473","url":null,"abstract":"We describe the first self-indexes able to count and locate pattern occurrences in optimal time within a space bounded by the size of the most popular dictionary compressors. To achieve this result, we combine several recent findings, including string attractors—new combinatorial objects encompassing most known compressibility measures for highly repetitive texts—and grammars based on locally consistent parsing. More in detail, letγ be the size of the smallest attractor for a text T of length n. The measureγ is an (asymptotic) lower bound to the size of dictionary compressors based on Lempel–Ziv, context-free grammars, and many others. The smallest known text representations in terms of attractors use space O(γ log (n/γ)), and our lightest indexes work within the same asymptotic space. Let ε > 0 be a suitably small constant fixed at construction time, m be the pattern length, and occ be the number of its text occurrences. Our index counts pattern occurrences in O(m+log 2+ε n) time and locates them in O(m+(occ+1)log ε n) time. These times already outperform those of most dictionary-compressed indexes, while obtaining the least asymptotic space for any index searching within O((m+occ),polylog, n) time. Further, by increasing the space to O(γ log (n/γ)log ε n), we reduce the locating time to the optimal O(m+occ), and within O(γ log (n/γ)log n) space we can also count in optimal O(m) time. No dictionary-compressed index had obtained this time before. All our indexes can be constructed in O(n) space and O(nlog n) expected time. As a by-product of independent interest, we show how to build, in O(n) expected time and without knowing the sizeγ of the smallest attractor (which is NP-hard to find), a run-length context-free grammar of size O(γ log (n/γ)) generating (only) T. As a result, our indexes can be built without knowingγ.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123592900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sándor Kisfaludi-Bak, Jesper Nederlof, E. J. V. Leeuwen
The STEINER TREE problem is one of the most fundamental NP-complete problems, as it models many network design problems. Recall that an instance of this problem consists of a graph with edge weights and a subset of vertices (often called terminals); the goal is to find a subtree of the graph of minimum total weight that connects all terminals. A seminal paper by Erickson et al. [Math. Oper. Res., 1987] considers instances where the underlying graph is planar and all terminals can be covered by the boundary of k faces. Erickson et al. show that the problem can be solved by an algorithm using n^{O(k)} time and n^{O(k)} space, where n denotes the number of vertices of the input graph. In the past 30 years there has been no significant improvement over this algorithm, despite several efforts. In this work, we give an algorithm for PLANAR STEINER TREE with running time 2^{O(k)} · n^{O(√k)} under the above parameterization, using only polynomial space. Furthermore, we show that the running time of our algorithm is almost tight: we prove that there is no f(k) · n^{o(√k)} algorithm for PLANAR STEINER TREE for any computable function f, unless the Exponential Time Hypothesis fails.
{"title":"Nearly ETH-tight Algorithms for Planar Steiner Tree with Terminals on Few Faces","authors":"Sándor Kisfaludi-Bak, Jesper Nederlof, E. J. V. Leeuwen","doi":"10.1145/3371389","DOIUrl":"https://doi.org/10.1145/3371389","url":null,"abstract":"The STEINER TREE problem is one of the most fundamental NP-complete problems, as it models many network design problems. Recall that an instance of this problem consists of a graph with edge weights and a subset of vertices (often called terminals); the goal is to find a subtree of the graph of minimum total weight that connects all terminals. A seminal paper by Erickson et al. [Math. Oper. Res., 1987{ considers instances where the underlying graph is planar and all terminals can be covered by the boundary of k faces. Erickson et al. show that the problem can be solved by an algorithm using nO(k) time and nO(k) space, where n denotes the number of vertices of the input graph. In the past 30 years there has been no significant improvement of this algorithm, despite several efforts. In this work, we give an algorithm for PLANAR STEINER TREE with running time 2O(k)nO(√k) with the above parameterization, using only polynomial space. Furthermore, we show that the running time of our algorithm is almost tight: We prove that there is no f(k)no(√k) algorithm for PLANAR STEINER TREE for any computable function f, unless the Exponential Time Hypothesis fails.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128130913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
F. Fomin, P. Golovach, D. Lokshtanov, Saket Saurabh, M. Zehavi
MAX-CUT, EDGE DOMINATING SET, GRAPH COLORING, and HAMILTONIAN CYCLE on graphs of bounded clique-width have received significant attention, as they can be formulated in MSO2 (and, therefore, have linear-time algorithms on bounded-treewidth graphs by the celebrated theorem of Courcelle), but cannot be formulated in MSO1 (which would have yielded linear-time algorithms on bounded clique-width graphs by a well-known theorem of Courcelle, Makowsky, and Rotics). Each of these problems can be solved in time g(k) · n^{f(k)} on graphs of clique-width k. Fomin et al. (2010) showed that the running times cannot be improved to g(k) · n^{O(1)} assuming W[1] ≠ FPT. However, this does not rule out non-trivial improvements to the exponent f(k) in the running times. In a follow-up paper, Fomin et al. (2014) improved the running times for EDGE DOMINATING SET and MAX-CUT to n^{O(k)}, and proved that these problems cannot be solved in time g(k) · n^{o(k)} unless the ETH fails. Thus, prior to this work, EDGE DOMINATING SET and MAX-CUT were known to have tight n^{Θ(k)} algorithmic upper and lower bounds. In this article, we provide lower bounds for HAMILTONIAN CYCLE and GRAPH COLORING. For HAMILTONIAN CYCLE, our lower bound of g(k) · n^{o(k)} asymptotically matches the recent upper bound of n^{O(k)} due to Bergougnoux, Kanté, and Kwon (2017). As opposed to the asymptotically tight n^{Θ(k)} bounds for EDGE DOMINATING SET, MAX-CUT, and HAMILTONIAN CYCLE, the GRAPH COLORING problem has an upper bound of n^{O(2^k)} and a lower bound of merely n^{o(k^{1/4})} (implicit from the W[1]-hardness proof). In this article, we close the gap for GRAPH COLORING by proving a lower bound of n^{2^{o(k)}}. This shows that GRAPH COLORING behaves qualitatively differently from the other three problems. To the best of our knowledge, GRAPH COLORING is the first natural problem known to require exponential dependence on the parameter in the exponent of n.
{"title":"Clique-width III","authors":"F. Fomin, P. Golovach, D. Lokshtanov, Saket Saurabh, M. Zehavi","doi":"10.1145/3280824","DOIUrl":"https://doi.org/10.1145/3280824","url":null,"abstract":"MAX-CUT, EDGE DOMINATING SET, GRAPH COLORING, and HAMILTONIAN CYCLE on graphs of bounded clique-width have received significant attention as they can be formulated in MSO2 (and, therefore, have linear-time algorithms on bounded treewidth graphs by the celebrated Courcelle’s theorem), but cannot be formulated in MSO1 (which would have yielded linear-time algorithms on bounded clique-width graphs by a well-known theorem of Courcelle, Makowsky, and Rotics). Each of these problems can be solved in time g(k)nf(k) on graphs of clique-width k. Fomin et al. (2010) showed that the running times cannot be improved to g(k)nO(1) assuming W[1]≠FPT. However, this does not rule out non-trivial improvements to the exponent f(k) in the running times. In a follow-up paper, Fomin et al. (2014) improved the running times for EDGE DOMINATING SET and MAX-CUT to nO(k), and proved that these problems cannot be solved in time g(k)no(k) unless ETH fails. Thus, prior to this work, EDGE DOMINATING SET and MAX-CUT were known to have tight nΘ (k) algorithmic upper and lower bounds. In this article, we provide lower bounds for HAMILTONIAN CYCLE and GRAPH COLORING. For HAMILTONIAN CYCLE, our lower bound g(k)no(k) matches asymptotically the recent upper bound nO(k) due to Bergougnoux, Kanté, and Kwon (2017). As opposed to the asymptotically tight nΘ(k) bounds for EDGE DOMINATING SET, MAX-CUT, and HAMILTONIAN CYCLE, the GRAPH COLORING problem has an upper bound of nO(2k) and a lower bound of merely no(√ [4]k) (implicit from the W[1]-hardness proof). In this article, we close the gap for GRAPH COLORING by proving a lower bound of n2o(k). This shows that GRAPH COLORING behaves qualitatively different from the other three problems. To the best of our knowledge, GRAPH COLORING is the first natural problem known to require exponential dependence on the parameter in the exponent of n.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133905056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Miriam Backens, L. A. Goldberg
We construct a theory of holant clones to capture the notion of expressibility in the holant framework. Their role is analogous to the role played by functional clones in the study of weighted counting Constraint Satisfaction Problems. We explore the landscape of conservative holant clones and determine the situations in which a set F of functions is "universal in the conservative case," which means that all functions are contained in the holant clone generated by F together with all unary functions. When F is not universal in the conservative case, we give concise generating sets for the clone. We demonstrate the usefulness of holant clone theory by using it to give a complete complexity classification for the problem of approximating the solution to conservative holant problems. We show that approximation is intractable exactly when F is universal in the conservative case.
{"title":"Holant Clones and the Approximability of Conservative Holant Problems","authors":"Miriam Backens, L. A. Goldberg","doi":"10.1145/3381425","DOIUrl":"https://doi.org/10.1145/3381425","url":null,"abstract":"We construct a theory of holant clones to capture the notion of expressibility in the holant framework. Their role is analogous to the role played by functional clones in the study of weighted counting Constraint Satisfaction Problems. We explore the landscape of conservative holant clones and determine the situations in which a set F of functions is “universal in the conservative case,” which means that all functions are contained in the holant clone generated by F together with all unary functions. When F is not universal in the conservative case, we give concise generating sets for the clone. We demonstrate the usefulness of the holant clone theory by using it to give a complete complexity-theory classification for the problem of approximating the solution to conservative holant problems. We show that approximation is intractable exactly when F is universal in the conservative case.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127492545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Bernstein, S. Forster, M. Henzinger
Many dynamic graph algorithms have an amortized update time, rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, there exist many recent randomized results that aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which bounds the time of each individual operation with high probability. In this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problems. (1) For dynamic spanner, the only known o(n) worst-case bounds were O(n^{3/4}) high-probability worst-case update time for maintaining a 3-spanner and O(n^{5/9}) for maintaining a 5-spanner. We give an O(1)^k · log^3(n) high-probability worst-case time bound for maintaining a (2k−1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k−1 and Õ(n^{1+1/k}) edges.) (2) For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with an o(n) worst-case time bound was known, and we present an algorithm with O(log^5(n)) high-probability worst-case time; similar worst-case bounds existed only for maintaining a matching that was (2+ε)-approximate, and hence not maximal. Our results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, which is a middle ground between the two above: an algorithm is said to have worst-case expected update time α if, for every update σ, the expected time to process σ is at most α. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: a worst-case expected update time of O(1) still allows for the possibility that a 1/f(n) fraction of the updates requires Θ(f(n)) time to process, for arbitrarily high f(n). In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: the query time remains the same, while the update time increases by a factor of O(log^2(n)). Thus, we achieve our results in two steps: (1) first, we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times; (2) then, we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. All our algorithms are Las-Vegas-type algorithms.
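The following toy simulation (our own code, not the article's actual scheduler) illustrates the gap the black-box reduction bridges: a single update with O(1) expected but heavy-tailed cost occasionally spikes, while racing a few independent copies and taking the first to finish almost never does.

```python
import random

def heavy_tailed_update() -> int:
    """Toy update cost: doubles while a biased coin keeps coming up
    heads, so cost = 2^i with probability (3/4) * (1/4)^i. The
    expectation is 3/2 = O(1), but individual updates can be slow."""
    cost = 1
    while random.random() < 0.25:
        cost *= 2
    return cost

def min_over_copies(copies: int) -> int:
    """Race `copies` independent instances and charge only the first
    (cheapest) to finish -- the core intuition for turning worst-case
    *expected* bounds into high-probability worst-case bounds."""
    return min(heavy_tailed_update() for _ in range(copies))

random.seed(1)
single  = [heavy_tailed_update() for _ in range(100_000)]
boosted = [min_over_copies(20)   for _ in range(100_000)]
print(max(single), max(boosted))  # the boosted maximum stays tiny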
{"title":"A Deamortization Approach for Dynamic Spanner and Dynamic Maximal Matching","authors":"A. Bernstein, S. Forster, M. Henzinger","doi":"10.1145/3469833","DOIUrl":"https://doi.org/10.1145/3469833","url":null,"abstract":"Many dynamic graph algorithms have an amortized update time, rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, there exist many recent randomized results that aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability. In this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problem. (1) For dynamic spanner, the only known o(n) worst-case bounds were O(n3/4) high-probability worst-case update time for maintaining a 3-spanner and O(n5/9) for maintaining a 5-spanner. We give a O(1)k log3 (n) high-probability worst-case time bound for maintaining a (2k-1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k-1 and Õ(n1+1/k) edges.) (2) For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with o(n) worst-case time bound was known and we present an algorithm with O(log 5 (n)) high-probability worst-case time; similar worst-case bounds existed only for maintaining a matching that was (2+ϵ)-approximate, and hence not maximal. Our results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, which is a middle ground between the two above: An algorithm is said to have worst-case expected update time ɑ if for every update σ, the expected time to process σ is at most ɑ. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: A worst-case expected update time of O(1) still allows for the possibility that every 1/f(n) updates requires ϴ (f(n)) time to process, for arbitrarily high f(n). In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: The query time remains the same, while the update time increases by a factor of O(log 2(n)). Thus, we achieve our results in two steps: (1) First, we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times. (2) Then, we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. 
All our algorithms are Las-Vegas-type algorithms.","PeriodicalId":154047,"journal":{"name":"ACM Transactions on Algorithms (TALG)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126293515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}