Robust moment estimation and improved clustering via sum of squares
Pravesh Kothari, J. Steinhardt, David Steurer (doi:10.1145/3188745.3188970)
We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers, and design a new family of convex relaxations for k-means clustering based on the sum-of-squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in ℝ^d in time d^{O(1/γ)}, so long as the means have separation Ω(k^γ). This in particular yields an algorithm for learning Gaussian mixtures with separation Ω(k^γ), thus partially resolving an open problem of Regev and Vijayaraghavan (FOCS 2017). The guarantees of our robust estimation algorithms improve in many cases significantly over the best previous ones, obtained in recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and for learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the sum-of-squares norms of moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory; a large class of distributions satisfies it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality.
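For reference, the Poincaré inequality in its standard form (the constant convention varies across sources; this restatement is ours, not the paper's notation):

```latex
% A distribution D over R^d satisfies the Poincare inequality with constant C
% if, for every differentiable function f : R^d -> R,
\operatorname{Var}_{x \sim D}\bigl[f(x)\bigr] \;\le\; C \cdot \mathbb{E}_{x \sim D}\bigl[\, \lVert \nabla f(x) \rVert_2^2 \,\bigr].
```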
{"title":"Robust moment estimation and improved clustering via sum of squares","authors":"Pravesh Kothari, J. Steinhardt, David Steurer","doi":"10.1145/3188745.3188970","DOIUrl":"https://doi.org/10.1145/3188745.3188970","url":null,"abstract":"We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers and design a new family of convex relaxations for k-means clustering based on sum-of-squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in d in time dO(1/γ) so long as the means have separation Ω(kγ). This in particular yields an algorithm for learning Gaussian mixtures with separation Ω(kγ), thus partially resolving an open problem of Regev and Vijayaraghavan regev2017learning. The guarantees of our robust estimation algorithms improve in many cases significantly over the best previous ones, obtained in the recent works. We also show that the guarantees of our algorithms match information-theoretic lower-bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the sum-of-squares norms for moment tensors of any distribution that satisfies the Poincare inequality. The Poincare inequality is a central inequality in probability theory, and a large class of distributions satisfy it including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, this yields that all of the above algorithmic improvements hold for distributions satisfying the Poincare inequality.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86216832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Succinct delegation for low-space non-deterministic computation
S. Badrinarayanan, Y. Kalai, Dakshita Khurana, A. Sahai, D. Wichs (doi:10.1145/3188745.3188924)
We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)), with communication complexity poly(S(n)), verifier runtime n · polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable: the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the random oracle model or under knowledge assumptions. Our results yield succinct non-interactive arguments, based on sub-exponential LWE, for many natural languages believed to be outside of P.
{"title":"Succinct delegation for low-space non-deterministic computation","authors":"S. Badrinarayanan, Y. Kalai, Dakshita Khurana, A. Sahai, D. Wichs","doi":"10.1145/3188745.3188924","DOIUrl":"https://doi.org/10.1145/3188745.3188924","url":null,"abstract":"We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)) with communication complexity poly(S(n)), verifier runtime n.polylog(T(n))+poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"532 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84344372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Composable and versatile privacy via truncated CDP
Mark Bun, C. Dwork, G. Rothblum, T. Steinke (doi:10.1145/3188745.3188946)
We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables an exponential accuracy improvement.
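The headline definition, paraphrased in our notation (see the paper for the exact formulation): (ρ, ω)-tCDP relaxes zero-concentrated DP by imposing its Rényi-divergence bound only up to a truncation point ω, rather than at all orders.

```latex
% A randomized algorithm M satisfies (rho, omega)-tCDP if, for all neighboring
% datasets x, x' and all Renyi orders alpha in (1, omega),
D_{\alpha}\bigl( M(x) \,\big\|\, M(x') \bigr) \;\le\; \rho\,\alpha .
```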
{"title":"Composable and versatile privacy via truncated CDP","authors":"Mark Bun, C. Dwork, G. Rothblum, T. Steinke","doi":"10.1145/3188745.3188946","DOIUrl":"https://doi.org/10.1145/3188745.3188946","url":null,"abstract":"We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we show a central task for which the new definition enables exponential accuracy improvement.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89546381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","authors":"","doi":"10.1145/3188745","DOIUrl":"https://doi.org/10.1145/3188745","url":null,"abstract":"","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89841414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Sum-of-squares meets Nash: lower bounds for finding any equilibrium
Pravesh Kothari, R. Mehta (doi:10.1145/3188745.3188892)
Computing a Nash equilibrium (NE) of a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares (SoS) method in computing equilibria, both exact and approximate. Previous works in this context have focused on the hardness of approximating "best" equilibria with respect to some natural quality measure, such as social welfare. Such results, however, do not directly relate to the complexity of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate or exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV): algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions, and invoke a verification oracle to check whether a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization, including the celebrated semidefinite-programming-based algorithms for Max-Cut and constraint-satisfaction problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for ε = Θ(1/poly(n)), there is no algorithm that uses a degree-o(n) SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an ε-approximate NE. For some constant ε, we show a similar result for degree-o(log n) SoS relaxations and list size n^{o(log n)}. Our results can be seen as an unconditional confirmation, within our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, yet every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player's strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis and Papadimitriou, and the classical hardness of computing equilibria due to Gilboa and Zemel.
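For concreteness, the standard notion of an ε-approximate Nash equilibrium of a bimatrix game (A, B) that these lower bounds concern (a textbook definition, not this paper's notation):

```latex
% Mixed strategies x, y (probability vectors) form an epsilon-approximate NE
% of the bimatrix game (A, B) if neither player gains more than epsilon by
% deviating unilaterally:
x^{\top} A\, y \;\ge\; \tilde{x}^{\top} A\, y - \varepsilon \;\;\; \forall\, \tilde{x},
\qquad
x^{\top} B\, y \;\ge\; x^{\top} B\, \tilde{y} - \varepsilon \;\;\; \forall\, \tilde{y}.
```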
{"title":"Sum-of-squares meets Nash: lower bounds for finding any equilibrium","authors":"Pravesh Kothari, R. Mehta","doi":"10.1145/3188745.3188892","DOIUrl":"https://doi.org/10.1145/3188745.3188892","url":null,"abstract":"Computing Nash equilibrium (NE) in two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on hardness of approximating “best” equilibria with respect to some natural quality measure on equilibria such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilbria in two player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree d SoS relaxation to construct a list of candidate (partial) solutions and invoke a verification oracle to check if a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization including the celebrated semi-definite programming based algorithms for Max-Cut, Constraint-Satisfaction Problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for є = Θ(1/poly(n)), there’s no algorithm that uses a o(n)-degree SoS relaxation to construct a 2o(n)-size list of candidates and obtain an є-approximate NE. For some constant є, we show a similar result for degree o(log(n)) SoS relaxation and list size no(log(n)). Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player’s strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis-Papadimitriou and the classical hardness of computing equilibria due to Gilbow-Zemel.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88192760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An optimal distributed (Δ+1)-coloring algorithm?
Yi-Jun Chang, Wenzheng Li, S. Pettie (doi:10.1145/3188745.3188964)
Vertex coloring is one of the classic symmetry-breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log* n + Det_d(poly log n)) time, where Det_d(n′) is the deterministic complexity of (deg+1)-list coloring (each vertex v's palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016), with complexity O(√(log Δ) + log log n + Det_d(poly log n)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δ · log^{2.5} Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log* n) randomized lower bound due to Naor (SIDMA 1991) and nearly matches the Ω(Det(poly log n)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_d(n′) and Det(n′) are both 2^{O(√(log n′))}, by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are asymptotically the same.
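To ground the terminology, here is a sequential Python simulation of one round of the classic random color trial underlying randomized LOCAL coloring algorithms in this line of work; it is a generic textbook subroutine, not the paper's algorithm, and the graph/palette representation is our own.

```python
import random

def random_color_trial(adj, palette, color):
    """One synchronized round of the classic random color trial: every
    uncolored vertex proposes a random color from its current palette and
    keeps it iff no neighbor proposed (or already holds) the same color.
    `adj` maps vertex -> set of neighbors, `palette` maps vertex -> set of
    allowed colors, `color` maps vertex -> chosen color or None."""
    proposals = {v: random.choice(sorted(palette[v]))
                 for v in adj if color[v] is None and palette[v]}
    for v, c in proposals.items():
        # Keep c only if it conflicts with no neighbor's proposal or color.
        if all(proposals.get(u) != c and color[u] != c for u in adj[v]):
            color[v] = c
    for v in adj:  # Shrink palettes by newly fixed neighbor colors.
        if color[v] is None:
            palette[v] -= {color[u] for u in adj[v] if color[u] is not None}
    return color
```

With (deg+1)-size palettes, each uncolored vertex succeeds with constant probability per round; the hard part, and the focus of this line of work, is driving the leftover uncolored subgraph down to poly log n size so a deterministic algorithm can finish.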
{"title":"An optimal distributed (Δ+1)-coloring algorithm?","authors":"Yi-Jun Chang, Wenzheng Li, S. Pettie","doi":"10.1145/3188745.3188964","DOIUrl":"https://doi.org/10.1145/3188745.3188964","url":null,"abstract":"Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log∗n + Detd(poly logn)) time, where Detd(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016). with complexity O(√logΔ + loglogn + Detd(poly logn)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δlog2.5Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log∗n) randomized lower bound, due to Naor (SIDMA 1991) and sort of matches the Ω(Det(poly logn)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Detd(n′) and Det(n′) are both 2O(√logn′) by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are the same, asymptotically.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84411344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Incomplete nested dissection
Rasmus Kyng, Richard Peng, Robert Schwieterman, Peng Zhang (doi:10.1145/3188745.3188960)
We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC'07, Shklarski-Toledo SIMAX'08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves the O(n^2) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination.
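The baseline being improved on is classic nested dissection; below is a minimal 2-D illustration of the separator-based elimination ordering (the paper treats 3-D trusses, and its "incomplete" variant only partially eliminates; this sketch, with our own names, shows only the classic ordering):

```python
def nested_dissection_order(rows, cols):
    """Classic nested dissection elimination order on a rows x cols grid
    graph: recursively split the region with a small separator and
    eliminate the separator vertices last. This is the baseline that costs
    O(n^2) work in 3-D; shown in 2-D for brevity."""
    order = []

    def recurse(r0, r1, c0, c1):
        h, w = r1 - r0, c1 - c0
        if h <= 2 and w <= 2:  # Base case: eliminate tiny blocks directly.
            order.extend((r, c) for r in range(r0, r1) for c in range(c0, c1))
            return
        if w >= h:  # Split along the longer side with a column separator.
            mid = (c0 + c1) // 2
            recurse(r0, r1, c0, mid)
            recurse(r0, r1, mid + 1, c1)
            order.extend((r, mid) for r in range(r0, r1))
        else:  # Row separator.
            mid = (r0 + r1) // 2
            recurse(r0, mid, c0, c1)
            recurse(mid + 1, r1, c0, c1)
            order.extend((mid, c) for c in range(c0, c1))

    recurse(0, rows, 0, cols)
    return order  # Feed this ordering to sparse Cholesky/LU to bound fill-in.
```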
{"title":"Incomplete nested dissection","authors":"Rasmus Kyng, Richard Peng, Robert Schwieterman, Peng Zhang","doi":"10.1145/3188745.3188960","DOIUrl":"https://doi.org/10.1145/3188745.3188960","url":null,"abstract":"We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy є in time O(k1/3 n5/3 log(1 / є)). This asymptotically improves the running time O(n2) by Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n1/44. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89117783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

More consequences of falsifying SETH and the orthogonal vectors conjecture
Amir Abboud, K. Bringmann, Holger Dell, Jesper Nederlof (doi:10.1145/3188745.3188938)
The Strong Exponential Time Hypothesis (SETH) and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε > 0 for which an O(N^{2−ε}) · poly(D) time algorithm can decide whether a given set of N D-dimensional binary vectors contains a pair of orthogonal vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε > 0 such that:
- For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture.
- For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c·log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n).
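As a point of reference, here is the exhaustive-search baseline that the OV-conjecture asserts cannot be beaten by a polynomial factor in N; a minimal sketch in Python (representation and names are ours):

```python
def has_orthogonal_pair(vectors):
    """Naive Orthogonal Vectors: check every pair of D-dimensional 0/1
    vectors for orthogonality (all coordinate-wise products zero) in
    O(N^2 * D) time. The OV-conjecture asserts no algorithm runs in
    O(N^(2-eps)) * poly(D) time for any fixed eps > 0."""
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True
    return False

# Example: (1, 0, 1) and (0, 1, 0) are orthogonal.
assert has_orthogonal_pair([(1, 0, 1), (1, 1, 0), (0, 1, 0)])
```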
{"title":"More consequences of falsifying SETH and the orthogonal vectors conjecture","authors":"Amir Abboud, K. Bringmann, Holger Dell, Jesper Nederlof","doi":"10.1145/3188745.3188938","DOIUrl":"https://doi.org/10.1145/3188745.3188938","url":null,"abstract":"The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states there is no ε>0 for which an O(N2−ε) poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV conjecture is false, then there is a fixed ε>0 such that: - For all d and all large enough k, there is a randomized algorithm that takes O(n(1−ε)k) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. - For all c, the satisfiability of sparse TC1 circuits on n inputs (that is, circuits with cn wires, depth clogn, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)n).","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82394937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Extensor-coding
Cornelius Brand, Holger Dell, T. Husfeldt (doi:10.1145/3188745.3188902)
We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices and m edges, up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} · 4^k · (n+m) · poly(k). The algorithm is based on associating with each vertex an element of the exterior (or Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches to the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^k · poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants.
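For context, a sketch of the classic color-coding trial (Alon-Yuster-Zwick) for k-path detection, one of the previous approaches the extensor technique generalizes; the Python representation and names are ours.

```python
import random

def k_path_trial(adj, k):
    """One trial of classic color-coding (Alon-Yuster-Zwick) for detecting
    a simple path on k vertices. `adj` maps each vertex to an iterable of
    out-neighbors. A trial succeeds with probability >= k!/k^k whenever a
    k-path exists, so O(e^k) independent trials boost detection to a
    constant probability."""
    color = {v: random.randrange(k) for v in adj}
    # dp[v] = color sets S such that some colorful path ends at v using
    # exactly the colors in S (one vertex per color).
    dp = {v: {frozenset([color[v]])} for v in adj}
    for _ in range(k - 1):  # Extend colorful paths one vertex at a time.
        ndp = {v: set() for v in adj}
        for u in adj:
            for s in dp[u]:
                for v in adj[u]:
                    if color[v] not in s:
                        ndp[v].add(s | {color[v]})
        dp = ndp
    return any(dp[v] for v in adj)  # Surviving sets have size exactly k.
```

Extensor-coding replaces the random colors with extensor-valued labels, so contributions of walks that revisit a vertex vanish in the exterior algebra; this is what enables approximate counting rather than mere detection.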
{"title":"Extensor-coding","authors":"Cornelius Brand, Holger Dell, T. Husfeldt","doi":"10.1145/3188745.3188902","DOIUrl":"https://doi.org/10.1145/3188745.3188902","url":null,"abstract":"We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε−2 4k(n+m) poly(k). The algorithm is based on associating with each vertex an element in the exterior (or, Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2k·poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89516353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improved approximation for tree augmentation: saving by rewiring
F. Grandoni, Christos Kalaitzis, R. Zenklusen (doi:10.1145/3188745.3188898)
The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree yields a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5, achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+ε)-approximation was also found very recently by Fiorini, Groß, Könemann, and Sanità [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a black-box reduction to a very structured type of instance, which played a crucial role in recent developments on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that, in addition, when starting with a well-chosen LP, the combined solution can be improved through a new "rewiring" technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. By showing that large matchings can be obtained in this problem, we conclude that a significant number of rewirings are possible, leading to an approximation factor below 1.5.
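For reference, the standard cut-LP relaxation of TAP, the usual starting point for LP-based approaches such as the above (the paper's "well-chosen LP" is a strengthened variant; this restatement uses our notation):

```latex
% Standard cut LP for TAP: pick a minimum-size fractional set of links so
% that every tree edge e is covered by at least one chosen link; here
% P_ell denotes the tree path connecting the endpoints of link ell.
\min \sum_{\ell \in L} x_\ell
\quad \text{s.t.} \quad
\sum_{\ell \in L :\, e \in P_\ell} x_\ell \;\ge\; 1 \;\; \forall\, e \in E(T),
\qquad x_\ell \ge 0 \;\; \forall\, \ell \in L .
```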
{"title":"Improved approximation for tree augmentation: saving by rewiring","authors":"F. Grandoni, Christos Kalaitzis, R. Zenklusen","doi":"10.1145/3188745.3188898","DOIUrl":"https://doi.org/10.1145/3188745.3188898","url":null,"abstract":"The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree leads to a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5 achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+є)-approximation has also been found very recently by Fiorini, Groß, K'onemann, and Sanitá [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation that is based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a black-box reduction to a very structured type of instance, which played a crucial role in recent development on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that additionally, when starting with a well-chosen LP, the combined solution can be improved through a new “rewiring” technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. By showing that large matchings can be obtained in this problem, we obtain that a significant number of rewirings are possible, thus leading to an approximation factor below 1.5.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74276044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}