
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

New hardness results for routing on disjoint paths
Pub Date: 2016-11-16 DOI: 10.1145/3055399.3055411
Julia Chuzhoy, David H. K. Kim, Rachit Nimavat
In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected n-vertex graph G, and a collection M = {(s_1,t_1),…,(s_k,t_k)} of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is O(√n), while the best current negative result is an Ω(log^{1/2-δ} n)-hardness of approximation for any constant δ, under standard complexity assumptions. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves an Õ(n^{1/4})-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is Õ(n^{9/19}). The best currently known lower bound for both these versions of the problem is APX-hardness. In this paper we prove that NDP is 2^{Ω(√log n)}-hard to approximate, unless all problems in NP have algorithms with running time n^{O(log n)}. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 4, and all source vertices lie on the boundary of a single face (but the destination vertices may lie anywhere in the graph). We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face.
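The simple greedy algorithm behind the O(√n) upper bound is short enough to sketch. Below is a minimal Python version (a sketch of the folklore greedy using networkx, not code from the paper): repeatedly route the demand pair whose endpoints are closest in the remaining graph, then delete the used vertices.

```python
import networkx as nx

def greedy_ndp(G, demand_pairs):
    # Folklore greedy for Node-Disjoint Paths: repeatedly route the demand
    # pair with the shortest remaining path, then delete its vertices so
    # that all routed paths stay node-disjoint.
    H = G.copy()
    routed = []
    while True:
        best = None
        for s, t in demand_pairs:
            if s in H and t in H:
                try:
                    path = nx.shortest_path(H, s, t)
                except nx.NetworkXNoPath:
                    continue
                if best is None or len(path) < len(best):
                    best = path
        if best is None:
            return routed
        routed.append(best)
        H.remove_nodes_from(best)
        demand_pairs = [(s, t) for s, t in demand_pairs if s in H and t in H]
```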
Citations: 28
Real stable polynomials and matroids: optimization and counting
Pub Date: 2016-11-14 DOI: 10.1145/3055399.3055457
D. Straszak, Nisheeth K. Vishnoi
Several fundamental optimization and counting problems arising in computer science, mathematics and physics can be reduced to one of the following computational tasks involving polynomials and set systems: given an oracle access to an m-variate real polynomial g and to a family ℬ of (multi-)subsets of [m], (1) compute the sum of coefficients of monomials in g corresponding to all the sets that appear in ℬ, or (2) find S ∈ ℬ such that the monomial in g corresponding to S has the largest coefficient in g. Special cases of these problems, such as computing permanents and mixed discriminants, sampling from determinantal point processes, and maximizing sub-determinants with combinatorial constraints, have been topics of much recent interest in theoretical computer science. In this paper we present a general convex programming framework geared to solve both of these problems. Subsequently, we show that, roughly, when g is a real stable polynomial with non-negative coefficients and ℬ is a matroid, the integrality gap of our convex relaxation is finite and depends only on m (and not on the coefficients of g). Prior to this work, such results were known only in important but sporadic cases that relied heavily on the structure of either g or ℬ; it was not even a priori clear if one could formulate a convex relaxation that has a finite integrality gap beyond these special cases. Two notable examples are a result by Gurvits for real stable polynomials g when ℬ contains one element, and a result by Nikolov and Singh for a family of multi-linear real stable polynomials when ℬ is the partition matroid. This work, which encapsulates almost all interesting cases of g and ℬ, benefits from both: it is inspired by the latter in coming up with the right convex programming relaxation and by the former in deriving the integrality gap. However, proving our results requires extensions of both; in that process we come up with new notions and connections between real stable polynomials and matroids which might be of independent interest.
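To fix notation, the two computational tasks can be rendered as a toy sketch, assuming g is given explicitly as a dictionary from monomial supports (frozensets of variable indices) to coefficients; this explicit representation is an illustration, not the oracle model of the paper:

```python
def sum_of_coefficients(g, B):
    # Task (1): sum of coefficients of monomials of g indexed by sets in B.
    return sum(g.get(frozenset(S), 0) for S in B)

def max_coefficient_set(g, B):
    # Task (2): the set S in B whose monomial has the largest coefficient.
    return max(B, key=lambda S: g.get(frozenset(S), 0))

# g = x1*x2 + 3*x1*x3 + 2*x2*x3, queried against B = {{1,2}, {1,3}}.
g = {frozenset({1, 2}): 1, frozenset({1, 3}): 3, frozenset({2, 3}): 2}
B = [{1, 2}, {1, 3}]
assert sum_of_coefficients(g, B) == 4
assert max_coefficient_set(g, B) == {1, 3}
```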
Citations: 48
Fully-dynamic minimum spanning forest with improved worst-case update time
Pub Date: 2016-11-09 DOI: 10.1145/3055399.3055415
Christian Wulff-Nilsen
We give a Las Vegas data structure which maintains a minimum spanning forest in an n-vertex edge-weighted undirected dynamic graph undergoing updates consisting of any mixture of edge insertions and deletions. Each update is supported in O(n^{1/2-c}) worst-case time w.h.p., where c > 0 is some constant, and this bound also holds in expectation. This is the first data structure achieving an improvement over the O(√n) deterministic worst-case update time of Eppstein et al., a bound that has been standing for 25 years. In fact, it was previously not even known how to maintain a spanning forest of an unweighted graph in worst-case time polynomially faster than Θ(√n). Our result is achieved by first giving a reduction from fully-dynamic to decremental minimum spanning forest preserving worst-case update time up to logarithmic factors. Then decremental minimum spanning forest is solved using several novel techniques, one of which involves keeping track of low-conductance cuts in a dynamic graph. An immediate corollary of our result is the first Las Vegas data structure for fully-dynamic connectivity where each update is handled in worst-case time polynomially faster than Θ(√n) w.h.p.; this data structure has O(1) worst-case query time.
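For scale, the trivial baseline that any dynamic MSF structure competes against recomputes the forest from scratch after each update; a minimal sketch (Kruskal with union-find, illustrative only and far slower than the bound above):

```python
class NaiveDynamicMSF:
    # Recompute-from-scratch baseline: every update reruns Kruskal, costing
    # O(m log m) time, far above the paper's O(n^{1/2-c}) worst-case bound.
    def __init__(self, n):
        self.n, self.edges = n, {}  # (u, v) with u < v  ->  weight

    def insert(self, u, v, w):
        self.edges[(min(u, v), max(u, v))] = w
        return self.msf()

    def delete(self, u, v):
        self.edges.pop((min(u, v), max(u, v)), None)
        return self.msf()

    def msf(self):
        parent = list(range(self.n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        forest = []
        for (u, v), w in sorted(self.edges.items(), key=lambda e: e[1]):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                forest.append((u, v, w))
        return forest
```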
Citations: 90
On the complexity of local distributed graph problems
Pub Date: 2016-11-08 DOI: 10.1145/3055399.3055471
M. Ghaffari, F. Kuhn, Yannic Maus
This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2^{O(√log n)}. Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in log n rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in log n rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs.
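The zero-round randomized solution for the red/blue coloring problem is literally one coin flip per node; a small simulation sketch, with the degree threshold 20 standing in for "sufficiently large polylogarithmic degree":

```python
import random

def zero_round_coloring(adj, threshold=20):
    # Zero communication rounds: every node flips a fair coin. A node of
    # degree d sees only one color among its neighbors with probability
    # 2^{1-d}, so for polylogarithmic degrees a union bound over all n
    # nodes succeeds with high probability.
    color = {v: random.choice(("red", "blue")) for v in adj}
    bad = [v for v, nbrs in adj.items()
           if len(nbrs) >= threshold and len({color[u] for u in nbrs}) < 2]
    return color, bad  # bad is empty w.h.p.
```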
Citations: 127
Learning from untrusted data
Pub Date: 2016-11-07 DOI: 10.1145/3055399.3055491
M. Charikar, J. Steinhardt, G. Valiant
The vast majority of theoretical results in machine learning and statistics assume that the training data is a reliable reflection of the phenomena to be learned. Similarly, most learning techniques used in practice are brittle to the presence of large amounts of biased or malicious data. Motivated by this, we consider two frameworks for studying estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers such that at least one is accurate. For example, given a dataset of n points for which an unknown subset of αn points are drawn from a distribution of interest, and no assumptions are made about the remaining (1 - α)n points, is it possible to return a list of poly(1/α) answers? The second framework, which we term the semi-verified model, asks whether a small dataset of trusted data (drawn from the distribution in question) can be used to extract accurate information from a much larger but untrusted dataset (of which only an α-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This result has immediate implications for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary.
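As a toy illustration of list-decodable learning (not the paper's algorithm): with only an α-fraction of clean points no single estimate can be reliable, but a short list of ⌈1/α⌉ candidates can contain an accurate one. The sketch below produces such a list with a plain k-means heuristic, which carries none of the paper's guarantees:

```python
import numpy as np

def candidate_means(points, alpha, iters=20, seed=0):
    # Toy list-decodable estimator: return ceil(1/alpha) candidate means
    # computed by plain k-means. Purely illustrative.
    points = np.asarray(points, dtype=float)
    k = int(np.ceil(1.0 / alpha))
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers  # a list of poly(1/alpha) answers
```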
Citations: 258
Algorithmic discrepancy beyond partial coloring
Pub Date: 2016-11-06 DOI: 10.1145/3055399.3055490
N. Bansal, S. Garg
The partial coloring method is one of the most powerful and widely used methods in combinatorial discrepancy problems. However, in many cases it leads to sub-optimal bounds, as the partial coloring step must be iterated a logarithmic number of times and the errors can add up in an adversarial way. We give a new and general algorithmic framework that overcomes the limitations of the partial coloring method and can be applied in a black-box manner to various problems. Using this framework, we give new improved bounds and algorithms for several classic problems in discrepancy. In particular, for Tusnády's problem, we give an improved O(log^2 n) bound for the discrepancy of axis-parallel rectangles and, more generally, an O_d(log^d n) bound for d-dimensional boxes in ℝ^d. Previously, even non-constructively, the best bounds were O(log^{2.5} n) and O_d(log^{d+0.5} n) respectively. Similarly, for the Steinitz problem we give the first algorithm that matches the best known non-constructive bounds due to Banaszczyk in the ℓ_∞ case, and improves the previous algorithmic bounds substantially in the ℓ_2 case. Our framework is based upon a substantial generalization of the techniques developed recently in the context of the Komlós discrepancy problem.
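For calibration, the elementary baseline in combinatorial discrepancy is a uniformly random ±1 coloring, which already achieves discrepancy O(√(n log m)) for m sets over n elements by a Chernoff-plus-union-bound argument; a minimal sketch:

```python
import random

def random_coloring_discrepancy(sets, n):
    # Baseline: a uniformly random +-1 coloring. For m sets over n elements,
    # Chernoff plus a union bound give max discrepancy O(sqrt(n log m)) w.h.p.
    chi = [random.choice((-1, 1)) for _ in range(n)]
    return max(abs(sum(chi[i] for i in S)) for S in sets)
```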
Citations: 42
Uniform sampling through the Lovasz local lemma
Pub Date: 2016-11-05 DOI: 10.1145/3055399.3055410
Heng Guo, M. Jerrum, Jingcheng Liu
We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms, such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences.
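Wilson's cycle-popping algorithm, cited above, is the cleanest instance of the template: only the randomness on a violated structure (a directed cycle) is redrawn, yet the output is exactly uniform over spanning trees oriented toward the chosen root. A sketch, assuming a connected undirected graph given as an adjacency-list dict:

```python
import random

def cycle_popping(adj, root):
    # Wilson's cycle popping: give every non-root vertex a random outgoing
    # arrow; while the arrows contain a directed cycle, resample only the
    # arrows on that cycle. The surviving arrows form a uniformly random
    # spanning tree oriented toward `root`.
    succ = {v: random.choice(adj[v]) for v in adj if v != root}
    while True:
        cycle = _find_cycle(succ, root)
        if cycle is None:
            return succ  # parent pointers of the sampled tree
        for v in cycle:  # partial rejection: redraw only the violated part
            succ[v] = random.choice(adj[v])

def _find_cycle(succ, root):
    for start in succ:
        seen, v = [], start
        while v != root and v not in seen:
            seen.append(v)
            v = succ[v]
        if v != root:  # the walk re-entered its own path: a cycle
            return seen[seen.index(v):]
    return None
```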
Citations: 76
Twenty (simple) questions
Pub Date: 2016-11-05 DOI: 10.1145/3055399.3055422
Y. Dagan, Yuval Filmus, Ariel Gabizon, S. Moran
A basic combinatorial interpretation of Shannon's entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution Π over the numbers {1,…,n}, and announces it to Bob. She then chooses a number x according to Π, and Bob attempts to identify x using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for Π: Bob's questions reveal the codeword for x bit by bit. This strategy finds x using fewer than H(Π)+1 questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately? Our first main result shows that for every distribution Π, Bob has a strategy that uses only questions of the form "x < c?" and "x = c?", and uncovers x using at most H(Π)+1 questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of O(r n^{1/r}) questions that achieve a performance of at most H(Π)+r, and show that Ω(r n^{1/r}) questions are required to achieve such a guarantee. Our second main result gives a set Q of 1.25^{n+o(n)} questions such that for every distribution Π, Bob can implement an optimal strategy for Π using only questions from Q. We also show that 1.25^{n-o(n)} questions are needed, for infinitely many n. If we allow a small slack of r over the optimal strategy, then roughly (rn)^{Θ(1/r)} questions are necessary and sufficient.
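The Huffman strategy from the opening paragraph is easy to make concrete. The sketch below computes the expected number of questions Bob asks (the expected codeword length, obtained as the total weight of all heap merges) and checks the H(Π)+1 guarantee on a small example:

```python
import heapq, math

def huffman_cost(pi):
    # Expected number of Yes/No questions under the Huffman strategy:
    # the expected codeword length equals the total weight of all merges.
    heap = list(pi)
    heapq.heapify(heap)
    cost = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)
    return cost

pi = [0.5, 0.25, 0.125, 0.125]
H = -sum(p * math.log2(p) for p in pi if p > 0)
assert H <= huffman_cost(pi) < H + 1  # fewer than H(Pi)+1 questions on average
```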
Citations: 11
DecreaseKeys are expensive for external memory priority queues
Pub Date: 2016-11-03 DOI: 10.1145/3055399.3055437
Kasper Eenberg, Kasper Green Larsen, Huacheng Yu
One of the biggest open problems in external memory data structures is the priority queue problem with DecreaseKey operations. If only Insert and ExtractMin operations need to be supported, one can design a comparison-based priority queue performing O((N/B) lg_{M/B} N) I/Os over a sequence of N operations, where B is the disk block size in number of words and M is the main memory size in number of words. This matches the lower bound for comparison-based sorting and is hence optimal for comparison-based priority queues. However, if we also need to support DecreaseKeys, the performance of the best known priority queue is only O((N/B) lg^2 N) I/Os. The big open question is whether a degradation in performance really is necessary. We answer this question affirmatively by proving a lower bound of Ω((N/B) lg lg_N B) I/Os for processing a sequence of N intermixed Insert, ExtractMin and DecreaseKey operations. Our lower bound is proved in the cell probe model and thus holds also for non-comparison-based priority queues.
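For reference, this operation set is cheap in internal memory; the sketch below supports Insert, ExtractMin and DecreaseKey in O(lg N) time each with a lazy binary heap. It does not model blocks or I/Os, which is exactly where the lower bound above bites:

```python
import heapq
from itertools import count

class LazyPriorityQueue:
    # Insert / ExtractMin / DecreaseKey on a binary heap, O(lg N) each:
    # DecreaseKey pushes a fresh entry and ExtractMin discards stale ones.
    def __init__(self):
        self.heap, self.key, self.tie = [], {}, count()

    def insert(self, item, key):
        self.key[item] = key
        heapq.heappush(self.heap, (key, next(self.tie), item))

    def decrease_key(self, item, new_key):
        assert new_key <= self.key[item]
        self.key[item] = new_key
        heapq.heappush(self.heap, (new_key, next(self.tie), item))

    def extract_min(self):
        while self.heap:
            key, _, item = heapq.heappop(self.heap)
            if self.key.get(item) == key:  # skip stale entries
                del self.key[item]
                return item, key
        raise IndexError("extract_min from empty queue")
```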
Citations: 8
Finding approximate local minima faster than gradient descent
Pub Date: 2016-11-03 DOI: 10.1145/3055399.3055464
Naman Agarwal, Z. Zhu, Brian Bullins, Elad Hazan, Tengyu Ma
We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning.
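A minimal sketch of the underlying idea, without the acceleration that yields the paper's linear-time guarantee: run gradient descent until the gradient is small, then use finite-difference Hessian-vector products and power iteration to look for a direction of negative curvature, stopping only when none exists (an approximate local minimum):

```python
import numpy as np

def hvp(grad, x, v, eps=1e-5):
    # Hessian-vector product from two gradient calls (finite differences).
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

def approx_local_min(f, grad, x, lr=0.1, g_tol=1e-4, curv_tol=1e-3, c=10.0):
    rng = np.random.default_rng(0)
    for _ in range(100000):
        g = grad(x)
        if np.linalg.norm(g) > g_tol:
            x = x - lr * g  # ordinary gradient step
            continue
        # Near-critical point: power iteration on (c*I - H) approximates
        # the eigenvector for the smallest eigenvalue of the Hessian H.
        v = rng.standard_normal(x.size)
        for _ in range(200):
            v = c * v - hvp(grad, x, v)
            v /= np.linalg.norm(v)
        if v @ hvp(grad, x, v) >= -curv_tol:
            return x  # no significant negative curvature: approximate local min
        x = x - lr * v if f(x - lr * v) < f(x + lr * v) else x + lr * v
    return x

# Toy test: f has a saddle at the origin and local minima at (+1, 0), (-1, 0).
f = lambda z: z[0] ** 4 / 4 - z[0] ** 2 / 2 + z[1] ** 2
grad = lambda z: np.array([z[0] ** 3 - z[0], 2 * z[1]])
print(approx_local_min(f, grad, np.zeros(2)))
```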
Citations: 231