
Journal of the ACM (JACM): Latest Publications

Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes
Pub Date : 2017-10-14 DOI: 10.1145/3417994
H. Ashtiani, S. Ben-David, Nicholas J. A. Harvey, Christopher Liaw, Abbas Mehrabian, Y. Plan
We introduce a novel technique for distribution learning based on a notion of sample compression. Any class of distributions that allows such a compression scheme can be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions. As an application of this technique, we prove that Θ̃(kd²/ε²) samples are necessary and sufficient for learning a mixture of k Gaussians in R^d, up to error ε in total variation distance. This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that Õ(kd/ε²) samples suffice, matching a known lower bound. Moreover, these results hold in an agnostic learning (or robust estimation) setting, in which the target distribution is only approximately a mixture of Gaussians. Our main upper bound is proven by showing that the class of Gaussians in R^d admits a small compression scheme.
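To make the stated rates concrete, the following back-of-the-envelope sketch (Python; the values of k, d, and ε are made-up examples, and the constants and logarithmic factors hidden by the Θ̃/Õ notation are deliberately dropped) evaluates the sample-complexity bounds from the abstract:

```python
# Rough sample-size estimates implied by the bounds above; constants and
# log factors hidden in the Θ̃/Õ notation are suppressed, so these are
# orders of magnitude only. The inputs are made-up example values.
def samples_general_mixture(k, d, eps):
    return k * d ** 2 / eps ** 2      # Θ̃(k d² / ε²) for general Gaussian mixtures

def samples_axis_aligned(k, d, eps):
    return k * d / eps ** 2           # Õ(k d / ε²) for axis-aligned mixtures

if __name__ == "__main__":
    k, d, eps = 10, 50, 0.1
    print(f"general mixture:      ~{samples_general_mixture(k, d, eps):.1e} samples")
    print(f"axis-aligned mixture: ~{samples_axis_aligned(k, d, eps):.1e} samples")
```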
{"title":"Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes","authors":"H. Ashtiani, S. Ben-David, Nicholas J. A. Harvey, Christopher Liaw, Abbas Mehrabian, Y. Plan","doi":"10.1145/3417994","DOIUrl":"https://doi.org/10.1145/3417994","url":null,"abstract":"We introduce a novel technique for distribution learning based on a notion of sample compression. Any class of distributions that allows such a compression scheme can be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions. As an application of this technique, we prove that ˜Θ(kd2/ε2) samples are necessary and sufficient for learning a mixture of k Gaussians in Rd, up to error ε in total variation distance. This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that Õ(kd/ε2) samples suffice, matching a known lower bound. Moreover, these results hold in an agnostic learning (or robust estimation) setting, in which the target distribution is only approximately a mixture of Gaussians. Our main upper bound is proven by showing that the class of Gaussians in Rd admits a small compression scheme.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76471591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Invited Articles Foreword
Pub Date : 2017-10-06 DOI: 10.1145/3140539
É. Tardos
The Invited Article section of this issue consists of two papers. The article “Parallel-Correctness and Transferability for Conjunctive Queries” by Tom J. Ameloot, Gaetano Geck, Bas Ketsman, Frank Neven, and Thomas Schwentick was invited from the 34th Annual ACM Symposium on Principles of Distributed Computing (PODC’15). The article “An Average-case Depth Hierarchy Theorem for Boolean Circuits” by Johan Håstad, Benjamin Rossman, Rocco A. Servedio, and Li-Yang Tan won a best paper award at the 56th Annual Symposium on Foundations of Computer Science (FOCS’15). We thank the PODC’15 and FOCS’15 Program Committees for their help in selecting these invited articles, and we thank editors Georg Gottlob and Avi Wigderson for handling the articles.
{"title":"Invited Articles Foreword","authors":"É. Tardos","doi":"10.1145/3140539","DOIUrl":"https://doi.org/10.1145/3140539","url":null,"abstract":"The Invited Article section of this issue consists of two papers. The article “Parallel-Correctness and Transferability for Conjunctive Queries” by Tom J. Ameloot, Gaerano Geck, Bas Ketsman, Frank Neven, and Thomas Schwentick was invited from the 34th Annual ACM Symposium on Principles of Distributed Computing (PODC’15). The article “An Average-case Depth Hierarchy Theorem for Boolean Circuits” by Johan Håstad, Benjamin Rossman, Rocco A. Servedio, and LiYang Tan won a best paper award at the 56th Annual Symposium on Foundations of Computer Science (FOCS’15). We thank the PODC’15 and FOCS’15 Program Committees for their help in selecting these invited articles, and we thank editors Georg Gottlob and Avi Widgerson for handling the articles.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73270815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimating the Unseen
Pub Date : 2017-10-04 DOI: 10.1145/3125643
Paul Valiant, G. Valiant
We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear-sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k/log k). For these estimation tasks, this performance is optimal, up to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well, in practice, for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. The key step in our approach is to first use the sample to characterize the “unseen” portion of the distribution—effectively reconstructing this portion of the distribution as accurately as if one had a logarithmic factor larger sample. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: We seek to estimate the shape of the unobserved portion of the distribution. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.
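As a point of reference for the Good-Turing scheme mentioned in the abstract, the sketch below (Python; the uniform distribution and sample size are made-up example values) computes the classical Good-Turing estimate of the unseen probability mass, namely the fraction of the sample consisting of elements seen exactly once. The article's estimators go further and recover the shape, not just the total mass, of the unseen portion.

```python
import random
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the total probability of unseen elements:
    (# elements observed exactly once) / (sample size)."""
    freq = Counter(sample)
    singletons = sum(1 for c in freq.values() if c == 1)
    return singletons / len(sample)

if __name__ == "__main__":
    rng = random.Random(0)
    k = 10_000                                        # support size (uniform distribution)
    n = 2_000                                         # sample size, well below k
    sample = [rng.randrange(k) for _ in range(n)]
    estimate = good_turing_missing_mass(sample)
    true_unseen = 1 - len(set(sample)) / k            # exact unseen mass under the uniform
    print(f"Good-Turing estimate: {estimate:.3f}  true unseen mass: {true_unseen:.3f}")
```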
{"title":"Estimating the Unseen","authors":"Paul Valiant, G. Valiant","doi":"10.1145/3125643","DOIUrl":"https://doi.org/10.1145/3125643","url":null,"abstract":"We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k log k). For these estimation tasks, this performance is optimal, to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well, in practice, for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. The key step in our approach is to first use the sample to characterize the “unseen” portion of the distribution—effectively reconstructing this portion of the distribution as accurately as if one had a logarithmic factor larger sample. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: We seek to estimate the shape of the unobserved portion of the distribution. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86639629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
The Matching Polytope has Exponential Extension Complexity
Pub Date : 2017-09-28 DOI: 10.1145/3127497
T. Rothvoss
A popular method in combinatorial optimization is to express polytopes P, which may potentially have exponentially many facets, as solutions of linear programs that use few extra variables to reduce the number of constraints down to a polynomial. After two decades of standstill, recent years have brought amazing progress in showing lower bounds for the so-called extension complexity, which for a polytope P denotes the smallest number of inequalities necessary to describe a higher-dimensional polytope Q that can be linearly projected on P. However, the central question in this field remained wide open: can the perfect matching polytope be written as an LP with polynomially many constraints? We answer this question negatively. In fact, the extension complexity of the perfect matching polytope in a complete n-node graph is 2^Ω(n). By a known reduction, this also improves the lower bound on the extension complexity for the TSP polytope from 2^Ω(√n) to 2^Ω(n).
{"title":"The Matching Polytope has Exponential Extension Complexity","authors":"T. Rothvoss","doi":"10.1145/3127497","DOIUrl":"https://doi.org/10.1145/3127497","url":null,"abstract":"A popular method in combinatorial optimization is to express polytopes P, which may potentially have exponentially many facets, as solutions of linear programs that use few extra variables to reduce the number of constraints down to a polynomial. After two decades of standstill, recent years have brought amazing progress in showing lower bounds for the so-called extension complexity, which for a polytope P denotes the smallest number of inequalities necessary to describe a higher-dimensional polytope Q that can be linearly projected on P. However, the central question in this field remained wide open: can the perfect matching polytope be written as an LP with polynomially many constraints? We answer this question negatively. In fact, the extension complexity of the perfect matching polytope in a complete n-node graph is 2Ω (n). By a known reduction, this also improves the lower bound on the extension complexity for the TSP polytope from 2Ω (√ n) to 2Ω (n).","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73044770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
The Complexity of Mean-Payoff Pushdown Games
Pub Date : 2017-09-15 DOI: 10.1145/3121408
K. Chatterjee, Yaron Velner
Two-player games on graphs are central in many problems in formal verification and program analysis, such as synthesis and verification of open systems. In this work, we consider solving recursive game graphs (or pushdown game graphs) that model the control flow of sequential programs with recursion. While pushdown games have been studied before with qualitative objectives—such as reachability and ω-regular objectives—in this work, we study for the first time such games with the most well-studied quantitative objective, the mean-payoff objective. In pushdown games, two types of strategies are relevant: (1) global strategies, which depend on the entire global history; and (2) modular strategies, which have only local memory and thus do not depend on the context of invocation but rather only on the history of the current invocation of the module. Our main results are as follows: (1) One-player pushdown games with mean-payoff objectives under global strategies are decidable in polynomial time. (2) Two-player pushdown games with mean-payoff objectives under global strategies are undecidable. (3) One-player pushdown games with mean-payoff objectives under modular strategies are NP-hard. (4) Two-player pushdown games with mean-payoff objectives under modular strategies can be solved in NP (i.e., both one-player and two-player pushdown games with mean-payoff objectives under modular strategies are NP-complete). We also establish the optimal strategy complexity by showing that global strategies for mean-payoff objectives require infinite memory even in one-player pushdown games and memoryless modular strategies are sufficient in two-player pushdown games. Finally, we also show that all the problems have the same complexity if the stack boundedness condition is added, where along with the mean-payoff objective the player must also ensure that the stack height is bounded.
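For intuition about the mean-payoff objective itself (independent of the pushdown/recursion aspect that is the subject of the article), in a finite one-player game graph the optimal value equals the best mean weight of a cycle reachable from the start vertex. The toy sketch below (Python, brute force over simple cycles, made-up example graph) only illustrates that objective; it is not an algorithm from the paper.

```python
def best_mean_cycle(edges, start):
    """Best (maximum) mean weight over all simple cycles reachable from `start`.
    edges: dict mapping node -> list of (successor, weight). Brute force, toy sizes only."""
    best = float("-inf")

    def dfs(node, path, weights):
        nonlocal best
        for succ, w in edges.get(node, []):
            if succ in path:
                # Closed a cycle: average the weights along it.
                i = path.index(succ)
                cycle_weights = weights[i:] + [w]
                best = max(best, sum(cycle_weights) / len(cycle_weights))
            else:
                dfs(succ, path + [succ], weights + [w])

    dfs(start, [start], [])
    return best

if __name__ == "__main__":
    # Made-up example graph: looping on c forever yields mean payoff 3,
    # which beats alternating between a and b (mean 1).
    g = {
        "a": [("b", 2), ("c", -1)],
        "b": [("a", 0)],
        "c": [("c", 3)],
    }
    print(best_mean_cycle(g, "a"))   # 3.0
```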
{"title":"The Complexity of Mean-Payoff Pushdown Games","authors":"K. Chatterjee, Yaron Velner","doi":"10.1145/3121408","DOIUrl":"https://doi.org/10.1145/3121408","url":null,"abstract":"Two-player games on graphs are central in many problems in formal verification and program analysis, such as synthesis and verification of open systems. In this work, we consider solving recursive game graphs (or pushdown game graphs) that model the control flow of sequential programs with recursion. While pushdown games have been studied before with qualitative objectives—such as reachability and ω-regular objectives—in this work, we study for the first time such games with the most well-studied quantitative objective, the mean-payoff objective. In pushdown games, two types of strategies are relevant: (1) global strategies, which depend on the entire global history; and (2) modular strategies, which have only local memory and thus do not depend on the context of invocation but rather only on the history of the current invocation of the module. Our main results are as follows: (1) One-player pushdown games with mean-payoff objectives under global strategies are decidable in polynomial time. (2) Two-player pushdown games with mean-payoff objectives under global strategies are undecidable. (3) One-player pushdown games with mean-payoff objectives under modular strategies are NP-hard. (4) Two-player pushdown games with mean-payoff objectives under modular strategies can be solved in NP (i.e., both one-player and two-player pushdown games with mean-payoff objectives under modular strategies are NP-complete). We also establish the optimal strategy complexity by showing that global strategies for mean-payoff objectives require infinite memory even in one-player pushdown games and memoryless modular strategies are sufficient in two-player pushdown games. Finally, we also show that all the problems have the same complexity if the stack boundedness condition is added, where along with the mean-payoff objective the player must also ensure that the stack height is bounded.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83940350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Near-Optimal Regret Bounds for Thompson Sampling
Pub Date : 2017-09-04 DOI: 10.1145/3088510
Shipra Agrawal, Navin Goyal
Thompson Sampling (TS) is one of the oldest heuristics for multiarmed bandit problems. It is a randomized algorithm based on Bayesian ideas and has recently generated significant interest after several studies demonstrated that it has favorable empirical performance compared to the state-of-the-art methods. In this article, a novel and almost tight martingale-based regret analysis for Thompson Sampling is presented. Our technique simultaneously yields both problem-dependent and problem-independent bounds: (1) the first near-optimal problem-independent bound of O(√(NT ln T)) on the expected regret and (2) the optimal problem-dependent bound of (1 + ε) Σ_i ln T / d(μ_i, μ_1) + O(N/ε²) on the expected regret (this bound was first proven by Kaufmann et al. (2012b)). Our technique is conceptually simple and easily extends to distributions other than the Beta distribution used in the original TS algorithm. For the version of TS that uses Gaussian priors, we prove a problem-independent bound of O(√(NT ln N)) on the expected regret and show the optimality of this bound by providing a matching lower bound. This is the first lower bound on the performance of a natural version of Thompson Sampling that is away from the general lower bound of Ω(√(NT)) for the multiarmed bandit problem.
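For concreteness, here is a minimal sketch (Python) of Thompson Sampling with Beta priors for Bernoulli bandits, the variant whose regret is analyzed above; the arm means, horizon, and seed are made-up example values.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson Sampling; returns total reward collected."""
    rng = random.Random(seed)
    n = len(true_means)
    successes = [0] * n
    failures = [0] * n
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample per arm from its Beta(successes+1, failures+1) posterior
        # and play the arm with the largest sampled mean.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1) for i in range(n)]
        arm = max(range(n), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward

if __name__ == "__main__":
    T = 10_000
    means = [0.9, 0.8, 0.5]                     # hypothetical Bernoulli arms
    reward = thompson_sampling(means, T)
    print("empirical regret:", max(means) * T - reward)
```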
{"title":"Near-Optimal Regret Bounds for Thompson Sampling","authors":"Shipra Agrawal, Navin Goyal","doi":"10.1145/3088510","DOIUrl":"https://doi.org/10.1145/3088510","url":null,"abstract":"Thompson Sampling (TS) is one of the oldest heuristics for multiarmed bandit problems. It is a randomized algorithm based on Bayesian ideas and has recently generated significant interest after several studies demonstrated that it has favorable empirical performance compared to the state-of-the-art methods. In this article, a novel and almost tight martingale-based regret analysis for Thompson Sampling is presented. Our technique simultaneously yields both problem-dependent and problem-independent bounds: (1) the first near-optimal problem-independent bound of O(√ NT ln T) on the expected regret and (2) the optimal problem-dependent bound of (1 + ϵ)Σi ln T / d(μi,μ1) + O(N/ϵ2) on the expected regret (this bound was first proven by Kaufmann et al. (2012b)). Our technique is conceptually simple and easily extends to distributions other than the Beta distribution used in the original TS algorithm. For the version of TS that uses Gaussian priors, we prove a problem-independent bound of O(√ NT ln N) on the expected regret and show the optimality of this bound by providing a matching lower bound. This is the first lower bound on the performance of a natural version of Thompson Sampling that is away from the general lower bound of Ω (√ NT) for the multiarmed bandit problem.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85566016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 112
Embeddability in R³ is NP-hard
Pub Date : 2017-08-25 DOI: 10.1145/3396593
A. D. Mesmay, Y. Rieck, E. Sedgwick, M. Tancer
We prove that the problem of deciding whether a two- or three-dimensional simplicial complex embeds into R³ is NP-hard. Our construction also shows that deciding whether a 3-manifold with boundary tori admits an S³ filling is NP-hard. The former stands in contrast with the lower-dimensional cases, which can be solved in linear time, and the latter with a variety of computational problems in 3-manifold topology, for example, unknot or 3-sphere recognition, which are in NP ∩ co-NP. (Membership of the latter problem in co-NP assumes the Generalized Riemann Hypothesis.) Our reduction encodes a satisfiability instance into the embeddability problem of a 3-manifold with boundary tori, and relies extensively on techniques from low-dimensional topology, most importantly Dehn fillings of manifolds with boundary tori.
{"title":"Embeddability in R3 is NP-hard","authors":"A. D. Mesmay, Y. Rieck, E. Sedgwick, M. Tancer","doi":"10.1145/3396593","DOIUrl":"https://doi.org/10.1145/3396593","url":null,"abstract":"We prove that the problem of deciding whether a two- or three-dimensional simplicial complex embeds into R3 is NP-hard. Our construction also shows that deciding whether a 3-manifold with boundary tori admits an S3 filling is NP-hard. The former stands in contrast with the lower-dimensional cases, which can be solved in linear time, and the latter with a variety of computational problems in 3-manifold topology, for example, unknot or 3-sphere recognition, which are in NP ∩ co- NP. (Membership of the latter problem in co-NP assumes the Generalized Riemann Hypotheses.) Our reduction encodes a satisfiability instance into the embeddability problem of a 3-manifold with boundary tori, and relies extensively on techniques from low-dimensional topology, most importantly Dehn fillings of manifolds with boundary tori.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90954740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Source Sets
Pub Date : 2017-08-17 DOI: 10.1145/3073408
P. Abdulla, Stavros Aronis, B. Jonsson, Konstantinos Sagonas
Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss tradeoffs of the source- and optimal-DPOR algorithm and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.
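The sketch below is not the source-set/DPOR algorithm itself, only a toy illustration (Python, with a made-up two-thread program of write events) of the redundancy that stateless model checkers face: many interleavings of independent events produce the same behavior, which is exactly the redundancy that partial order reduction avoids exploring.

```python
def interleavings(p, q):
    """All merges of the event sequences p and q that keep each thread's order."""
    if not p:
        yield list(q)
    elif not q:
        yield list(p)
    else:
        for rest in interleavings(p[1:], q):
            yield [p[0]] + rest
        for rest in interleavings(p, q[1:]):
            yield [q[0]] + rest

def run(schedule):
    """Execute a schedule of (variable, value) write events; return the final store."""
    store = {}
    for var, val in schedule:
        store[var] = val
    return tuple(sorted(store.items()))

if __name__ == "__main__":
    # Made-up two-thread program: each event writes a value to a variable.
    thread_p = [("x", 1), ("y", 2)]
    thread_q = [("z", 3), ("x", 4)]
    schedules = list(interleavings(thread_p, thread_q))
    final_states = {run(s) for s in schedules}
    print(len(schedules), "interleavings")        # 6 orderings in total
    print(len(final_states), "distinct outcomes")  # only 2: x ends up as 1 or 4
```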
{"title":"Source Sets","authors":"P. Abdulla, Stavros Aronis, B. Jonsson, Konstantinos Sagonas","doi":"10.1145/3073408","DOIUrl":"https://doi.org/10.1145/3073408","url":null,"abstract":"Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss tradeoffs of the source- and optimal-DPOR algorithm and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86183353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Invited Article Foreword
Pub Date : 2017-08-17 DOI: 10.1145/3119408
É. Tardos
{"title":"Invited Article Foreword","authors":"É. Tardos","doi":"10.1145/3119408","DOIUrl":"https://doi.org/10.1145/3119408","url":null,"abstract":"","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90287170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Online Bipartite Matching with Amortized O(log² n) Replacements
Pub Date : 2017-07-19 DOI: 10.1145/3344999
A. Bernstein, J. Holm, E. Rotenberg
In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one-by-one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes the shortest augmenting path from the newly inserted vertex (denoted the SAP protocol) uses at most amortized O(log² n) replacements per insertion, where n is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for any replacement strategy, almost matching the Ω(log n) lower bound. The previous best strategy known achieved amortized O(√n) replacements [Bosek, Leniowski, Sankowski, Zych, FOCS 2014]. For the SAP protocol in particular, nothing better than the trivial O(n) bound was known except in special cases. Our analysis immediately implies the same upper bound of O(log² n) reassignments for the capacitated assignment problem, where each vertex on the static side of the bipartition is initialized with the capacity to serve a number of vertices. We also analyze the problem of minimizing the maximum server load. We show that if the final graph has maximum server load L, then the SAP protocol makes amortized O(min{L log² n, √n log n}) reassignments. We also show that this is close to tight, because Ω(min{L, √n}) reassignments can be necessary.
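A minimal sketch of the SAP protocol analyzed above (Python; the class and variable names and the small example are made up, not from the paper): each arriving online vertex triggers a BFS for a shortest augmenting path starting at it, and the matching is flipped along that path, re-matching previously matched vertices as needed.

```python
from collections import deque

class ShortestAugmentingPathMatcher:
    """Online bipartite matching via shortest augmenting paths (SAP)."""

    def __init__(self):
        self.adj = {}            # online vertex  -> list of offline neighbors
        self.match_online = {}   # online vertex  -> matched offline vertex
        self.match_offline = {}  # offline vertex -> matched online vertex
        self.replacements = 0    # re-matchings of already-matched online vertices

    def insert(self, v, neighbors):
        """Handle the arrival of online vertex v together with its incident edges."""
        self.adj[v] = list(neighbors)
        # BFS over alternating paths: from an online vertex follow any edge to the
        # offline side; from a matched offline vertex follow only its matched edge back.
        parent = {}              # offline vertex -> online vertex it was reached from
        queue = deque([v])
        visited = set()          # offline vertices already reached
        free = None              # first unmatched offline vertex found (BFS => shortest path)
        while queue and free is None:
            u = queue.popleft()
            for w in self.adj[u]:
                if w in visited:
                    continue
                visited.add(w)
                parent[w] = u
                if w not in self.match_offline:
                    free = w
                    break
                queue.append(self.match_offline[w])
        if free is None:
            return False         # v stays unmatched for now
        # Flip the matching along the augmenting path free -> ... -> v.
        w = free
        while True:
            u = parent[w]
            previous = self.match_online.get(u)  # offline vertex u gives up, if any
            if previous is not None:
                self.replacements += 1
            self.match_online[u] = w
            self.match_offline[w] = u
            if u == v:
                return True
            w = previous

if __name__ == "__main__":
    m = ShortestAugmentingPathMatcher()
    m.insert("a", ["x", "y"])    # a is matched to x
    m.insert("b", ["x"])         # b needs x, so a is re-matched to y and b takes x
    print(m.match_online)        # {'a': 'y', 'b': 'x'}
    print(m.replacements)        # 1
```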
{"title":"Online Bipartite Matching with Amortized O(log 2 n) Replacements","authors":"A. Bernstein, J. Holm, E. Rotenberg","doi":"10.1145/3344999","DOIUrl":"https://doi.org/10.1145/3344999","url":null,"abstract":"In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one-by-one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes the shortest augmenting path from the newly inserted vertex (denoted the SAP protocol) uses at most amortized O(log 2 n) replacements per insertion, where n is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for any replacement strategy, almost matching the Ω (log n) lower bound. The previous best strategy known achieved amortized O(√ n) replacements [Bosek, Leniowski, Sankowski, Zych, FOCS 2014]. For the SAP protocol in particular, nothing better than the trivial O(n) bound was known except in special cases. Our analysis immediately implies the same upper bound of O(log 2 n) reassignments for the capacitated assignment problem, where each vertex on the static side of the bipartition is initialized with the capacity to serve a number of vertices. We also analyze the problem of minimizing the maximum server load. We show that if the final graph has maximum server load L, then the SAP protocol makes amortized O(min { L log2 n , √ nlog n}) reassignments. We also show that this is close to tight, because Ω (min { L, √ n}) reassignments can be necessary.","PeriodicalId":17199,"journal":{"name":"Journal of the ACM (JACM)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86951707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34