
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

Robust moment estimation and improved clustering via sum of squares
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188970
Pravesh Kothari, J. Steinhardt, David Steurer
We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers, and design a new family of convex relaxations for k-means clustering based on the sum-of-squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in R^d in time d^{O(1/γ)}, so long as the means have separation Ω(k^γ). In particular, this yields an algorithm for learning Gaussian mixtures with separation Ω(k^γ), thus partially resolving an open problem of Regev and Vijayaraghavan (2017). The guarantees of our robust estimation algorithms improve significantly, in many cases, over the best previously known guarantees from recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and for learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the sum-of-squares norms of moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfy it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality.
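For reference, the Poincaré inequality mentioned above is a standard functional inequality; one common formulation (stated here for convenience, not quoted from the paper) says that a distribution D over R^d satisfies it with constant C if, for every differentiable test function f,

\operatorname{Var}_{x \sim \mathcal{D}}\bigl[f(x)\bigr] \;\le\; C \cdot \mathbb{E}_{x \sim \mathcal{D}}\bigl[\|\nabla f(x)\|_2^2\bigr].

For example, a Gaussian with covariance Σ satisfies this with C = ||Σ||, which is one concrete way to see that the class of distributions covered by the last statement is broad.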
Citations: 130
Succinct delegation for low-space non-deterministic computation
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188924
S. Badrinarayanan, Y. Kalai, Dakshita Khurana, A. Sahai, D. Wichs
We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting n denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space (T(n), S(n)), with communication complexity poly(S(n)), verifier runtime n · polylog(T(n)) + poly(S(n)), and prover runtime poly(T(n)). Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable: the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments, based on sub-exponential LWE, for many natural languages believed to be outside of P.
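To fix the shape of such a scheme: the verifier publishes a public key once, the prover sends a single short proof, and verification requires the verifier's secret key. The sketch below is only our schematic of that interface, with placeholder (insecure, toy) procedures; the actual construction from sub-exponential PIR/LWE is not reproduced here.

from dataclasses import dataclass

@dataclass
class Keys:
    pk: bytes  # published once by the verifier (first message)
    sk: bytes  # kept secret; needed to verify proofs (privately verifiable)

def keygen() -> Keys:
    # Placeholder key generation; a real scheme derives these from a PIR/LWE setup.
    return Keys(pk=b"public-parameters", sk=b"secret-verification-key")

def prove(pk: bytes, statement: str, witness: str) -> bytes:
    # Prover's single message: a short proof of size roughly poly(S(n)).
    # Placeholder only; the witness is unused here but drives the real proof.
    return ("proof-for:" + statement).encode()

def verify(sk: bytes, statement: str, proof: bytes) -> bool:
    # Verifier uses its secret key; runtime n*polylog(T(n)) + poly(S(n)) in the real scheme.
    return proof == ("proof-for:" + statement).encode()

keys = keygen()
pi = prove(keys.pk, "x is in L", witness="w")
print(verify(keys.sk, "x is in L", pi))  # True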
Citations: 24
Composable and versatile privacy via truncated CDP
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188946
Mark Bun, C. Dwork, G. Rothblum, T. Steinke
We propose truncated concentrated differential privacy (tCDP), a refinement of differential privacy and of concentrated differential privacy. This new definition provides robust and efficient composition guarantees, supports powerful algorithmic techniques such as privacy amplification via sub-sampling, and enables more accurate statistical analyses. In particular, we exhibit a central task for which the new definition enables an exponential improvement in accuracy.
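For orientation, concentrated differential privacy and its truncated refinement are usually phrased via Rényi divergences. Roughly (our paraphrase of the standard definitions, with ρ and ω the two parameters of tCDP), a mechanism M is (ρ, ω)-tCDP if for all pairs of neighboring datasets x, x′,

D_{\alpha}\bigl(M(x) \,\big\|\, M(x')\bigr) \;\le\; \rho\,\alpha \qquad \text{for all } 1 < \alpha < \omega .

Taking ω = ∞ recovers ρ-zCDP; the truncation at ω is what allows the additional flexibility, such as the amplification by sub-sampling mentioned above.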
Citations: 138
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing
Citations: 55
Sum-of-squares meets Nash: lower bounds for finding any equilibrium
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188892
Pravesh Kothari, R. Mehta
Computing a Nash equilibrium (NE) in a two-player game is a central question in algorithmic game theory. The main motivation of this work is to understand the power of the sum-of-squares method in computing equilibria, both exact and approximate. Previous works in this context have focused on the hardness of approximating “best” equilibria with respect to some natural quality measure on equilibria, such as social welfare. Such results, however, do not directly relate to the complexity of the problem of finding any equilibrium. In this work, we propose a framework of roundings for the sum-of-squares algorithm (and convex relaxations in general) applicable to finding approximate/exact equilibria in two-player bimatrix games. Specifically, we define the notion of oblivious roundings with verification oracle (OV). These are algorithms that can access a solution to the degree-d SoS relaxation to construct a list of candidate (partial) solutions, and invoke a verification oracle to check whether a candidate in the list gives an (exact or approximate) equilibrium. This framework captures most known approximation algorithms in combinatorial optimization, including the celebrated semi-definite programming based algorithms for Max-Cut and constraint-satisfaction problems, and the recent works on SoS relaxations for Unique Games/Small-Set Expansion, Best Separable State, and many problems in unsupervised machine learning. Our main results are strong unconditional lower bounds in this framework. Specifically, we show that for ε = Θ(1/poly(n)), there is no algorithm that uses a degree-o(n) SoS relaxation to construct a 2^{o(n)}-size list of candidates and obtain an ε-approximate NE. For some constant ε, we show a similar result for degree-o(log(n)) SoS relaxations and list size n^{o(log(n))}. Our results can be seen as an unconditional confirmation, in our restricted algorithmic framework, of the recent Exponential Time Hypothesis for PPAD. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but every (approximate) equilibrium of any game is far from every equilibrium of any other game in the family (in either player’s strategy). Along the way, we strengthen the classical unconditional lower bound against enumerative algorithms for finding approximate equilibria due to Daskalakis and Papadimitriou, and the classical hardness of computing equilibria due to Gilboa and Zemel.
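To make the role of the verification oracle concrete: deciding whether a candidate pair of mixed strategies is an ε-approximate Nash equilibrium of a bimatrix game only requires comparing each player's payoff against their best pure response. A minimal sketch of such a check (ours, not the paper's), with payoff matrices A and B for the row and column players:

import numpy as np

def is_approx_nash(A, B, x, y, eps):
    # Row player's payoff is x^T A y, column player's is x^T B y.
    row_payoff = x @ A @ y
    col_payoff = x @ B @ y
    best_row = np.max(A @ y)   # best pure response available to the row player
    best_col = np.max(x @ B)   # best pure response available to the column player
    return row_payoff >= best_row - eps and col_payoff >= best_col - eps

# Matching pennies: the uniform strategies form an exact (hence eps-approximate) equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(is_approx_nash(A, B, x, y, eps=1e-9))  # True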
Citations: 12
An optimal distributed (Δ+1)-coloring algorithm?
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188964
Yi-Jun Chang, Wenzheng Li, S. Pettie
Vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. In this paper we present a new algorithm for (Δ+1)-list coloring in the randomized LOCAL model running in O(log* n + Det_d(poly log n)) time, where Det_d(n′) is the deterministic complexity of (deg+1)-list coloring (v’s palette has size deg(v)+1) on n′-vertex graphs. This improves upon a previous randomized algorithm of Harris, Schneider, and Su (STOC 2016), with complexity O(√(log Δ) + log log n + Det_d(poly log n)), and (when Δ is sufficiently large) is much faster than the best known deterministic algorithm of Fraigniaud, Heinrich, and Kosowski (FOCS 2016), with complexity O(√Δ log^{2.5} Δ + log* n). Our algorithm appears to be optimal. It matches the Ω(log* n) randomized lower bound due to Naor (SIDMA 1991) and essentially matches the Ω(Det(poly log n)) randomized lower bound due to Chang, Kopelowitz, and Pettie (FOCS 2016), where Det is the deterministic complexity of (Δ+1)-list coloring. The best known upper bounds on Det_d(n′) and Det(n′) are both 2^{O(√(log n′))}, by Panconesi and Srinivasan (Journal of Algorithms 1996), and it is quite plausible that the complexities of both problems are asymptotically the same.
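The combinatorial fact behind (deg+1)-list coloring is simple: if each vertex v receives a palette of deg(v)+1 colors, some palette color is always free regardless of how its neighbors were colored, so a valid coloring always exists. The sequential sketch below (ours) only illustrates that feasibility argument; the paper's contribution is achieving this in few rounds of the distributed LOCAL model, which this toy code does not capture.

def greedy_list_coloring(adj, palettes):
    # Assumes len(palettes[v]) >= deg(v) + 1 for every vertex v.
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        # A free color exists because the palette is strictly larger than the degree.
        color[v] = next(c for c in palettes[v] if c not in used)
    return color

# Example: a triangle, each vertex with a palette of size deg + 1 = 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
palettes = {v: [0, 1, 2] for v in adj}
print(greedy_list_coloring(adj, palettes))  # e.g. {0: 0, 1: 1, 2: 2}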
Citations: 71
Incomplete nested dissection
Pub Date : 2018-05-23 DOI: 10.1145/3188745.3188960
Rasmus Kyng, Richard Peng, Robert Schwieterman, Peng Zhang
We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves upon the O(n^2) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination.
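For a sense of when the new bound wins, compare the stated running time with the O(n^2) of Nested Dissection (ignoring the log(1/ε) factor):

k^{1/3} \, n^{5/3} \;\ll\; n^{2} \quad\Longleftrightarrow\quad k^{1/3} \ll n^{1/3} \quad\Longleftrightarrow\quad k \ll n ,

which is exactly the regime k ≪ n claimed above for the bounded-aspect-ratio case.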
Citations: 8
More consequences of falsifying SETH and the orthogonal vectors conjecture
Pub Date : 2018-05-22 DOI: 10.1145/3188745.3188938
Amir Abboud, K. Bringmann, Holger Dell, Jesper Nederlof
The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε > 0 for which an O(N^{2−ε}) · poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε > 0 such that: (1) For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n).
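The baseline that the OV-conjecture asserts cannot be substantially beaten is the brute-force pairwise check, which runs in O(N^2 · D) time; a small sketch for concreteness (ours):

def has_orthogonal_pair(vectors):
    # vectors: a list of N binary (0/1) vectors, each of dimension D.
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True  # found a pair with inner product 0
    return False

print(has_orthogonal_pair([[1, 0, 1], [0, 1, 0], [1, 1, 0]]))  # True: the first two vectors are orthogonal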
Citations: 36
Extensor-coding
Pub Date : 2018-04-25 DOI: 10.1145/3188745.3188902
Cornelius Brand, Holger Dell, T. Husfeldt
We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices, up to a multiplicative error of 1 ± ε. Our algorithm runs in time ε^{−2} · 4^k · (n+m) · poly(k). The algorithm is based on associating with each vertex an element in the exterior (or Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches for the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^k · poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect multilinear terms of degree k in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants.
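The reason the exterior algebra is suited to separating paths from arbitrary walks is its anti-commutativity: assigning each vertex i a generator e_i, one has

e_i \wedge e_j \;=\; -\, e_j \wedge e_i , \qquad \text{and in particular} \qquad e_i \wedge e_i \;=\; 0 ,

so, roughly speaking, any walk that revisits a vertex contributes zero to the associated product and only genuine paths survive. (This is the standard property of the Grassmann algebra; how the paper combines it with randomization to control the approximation error is the technical content of the result.)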
Citations: 24
Improved approximation for tree augmentation: saving by rewiring
Pub Date : 2018-04-06 DOI: 10.1145/3188745.3188898
F. Grandoni, Christos Kalaitzis, R. Zenklusen
The Tree Augmentation Problem (TAP) is a fundamental network design problem in which we are given a tree and a set of additional edges, also called links. The task is to find a set of links, of minimum size, whose addition to the tree yields a 2-edge-connected graph. A long line of results on TAP culminated in the previously best known approximation guarantee of 1.5, achieved by a combinatorial approach due to Kortsarz and Nutov [ACM Transactions on Algorithms 2016], and also by an SDP-based approach by Cheriyan and Gao [Algorithmica 2017]. Moreover, an elegant LP-based (1.5+ε)-approximation has also been found very recently by Fiorini, Groß, Könemann, and Sanità [SODA 2018]. In this paper, we show that an approximation factor below 1.5 can be achieved, by presenting a 1.458-approximation that is based on several new techniques. By extending prior results of Adjiashvili [SODA 2017], we first present a black-box reduction to a very structured type of instance, which played a crucial role in recent developments on the problem, and which we call k-wide. Our main contribution is a new approximation algorithm for O(1)-wide tree instances with approximation guarantee strictly below 1.458, based on one of their fundamental properties: wide trees naturally decompose into smaller subtrees with a constant number of leaves. Previous approaches in similar settings rounded each subtree independently and simply combined the obtained solutions. We show that, additionally, when starting with a well-chosen LP, the combined solution can be improved through a new “rewiring” technique, showing that one can replace some pairs of used links by a single link. We can rephrase the rewiring problem as a stochastic version of a matching problem, which may be of independent interest. By showing that large matchings can be obtained in this problem, we show that a significant number of rewirings are possible, thus leading to an approximation factor below 1.5.
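For context, the natural LP relaxation underlying LP-based approaches to TAP is the standard cut LP, with one fractional variable x_ℓ per link ℓ:

\min \sum_{\ell \in L} x_\ell \qquad \text{s.t.} \qquad \sum_{\ell \in L \,:\, \ell \text{ covers } e} x_\ell \;\ge\; 1 \ \ \text{for every tree edge } e, \qquad x_\ell \ge 0 ,

where a link ℓ = (u, v) covers a tree edge e if e lies on the tree path between u and v. Whether the rounding in this paper starts from exactly this LP or from a strengthened variant is not specified in the abstract, which only refers to a “well-chosen LP”.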
Citations: 37