
Proceedings of the forty-seventh annual ACM symposium on Theory of Computing: Latest Publications

Computing with Tangles
Pub Date : 2015-02-28 DOI: 10.1145/2746539.2746587
Martin Grohe, Pascal Schweitzer
Tangles of graphs were introduced by Robertson and Seymour in the context of their graph minor theory. Tangles may be viewed as describing "k-connected components" of a graph (though in a twisted way), and they play an important role in graph minor theory. An interesting aspect of tangles is that they can be defined not only for graphs, but more generally for arbitrary connectivity functions (that is, integer-valued submodular and symmetric set functions). However, tangles are difficult to deal with algorithmically. To start with, it is unclear how to represent them, because they are families of separations and as such may be exponentially large. Our first contribution is a data structure for representing and accessing all tangles of a graph up to some fixed order. Using this data structure, we can prove an algorithmic version of a very general structure theorem due to Carmesin, Diestel, Harman and Hundertmark (for graphs) and Hundertmark (for arbitrary connectivity functions) that yields a canonical tree decomposition whose parts correspond to the maximal tangles. (This may be viewed as a generalisation of the decomposition of a graph into its 3-connected components.)
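The connectivity functions mentioned above can be illustrated concretely: the edge-cut function of any graph is integer-valued, symmetric, and submodular, so tangles can be defined with respect to it. A minimal brute-force check on a small example graph (the graph itself is illustrative, not from the paper):

```python
from itertools import combinations

# Toy check that the edge-cut function of a graph is a "connectivity
# function" in the paper's sense: integer-valued, symmetric, submodular.
# The 4-vertex graph below is an arbitrary illustrative example.
V = frozenset({0, 1, 2, 3})
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut(S):
    """Number of edges with exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in E)

subsets = [frozenset(c) for r in range(len(V) + 1)
           for c in combinations(sorted(V), r)]

# Symmetry: f(S) = f(V \ S) for every S.
assert all(cut(S) == cut(V - S) for S in subsets)

# Submodularity: f(A) + f(B) >= f(A | B) + f(A & B) for all A, B.
assert all(cut(A) + cut(B) >= cut(A | B) + cut(A & B)
           for A in subsets for B in subsets)
print("edge-cut is symmetric and submodular on this graph")
```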
Citations: 16
Prioritized Metric Structures and Embedding
Pub Date : 2015-02-19 DOI: 10.1145/2746539.2746623
Michael Elkin, Arnold Filtser, Ofer Neiman
Metric data structures (distance oracles, distance labeling schemes, routing schemes) and low-distortion embeddings provide a powerful algorithmic methodology, which has been successfully applied to approximation algorithms [21], online algorithms [7], distributed algorithms [19] and computing sparsifiers [28]. However, this methodology appears to have a limitation: the worst-case performance inherently depends on the cardinality of the metric, and one cannot specify in advance which vertices/points should enjoy better service (i.e., stretch/distortion, label size/dimension) than that given by the worst-case guarantee. In this paper we alleviate this limitation by devising a suite of prioritized metric data structures and embeddings. We show that given a priority ranking (x1, x2, ..., xn) of the graph vertices (respectively, metric points), one can devise a metric data structure (respectively, embedding) in which the stretch (resp., distortion) incurred by any pair containing a vertex xj depends on the rank j of the vertex. We also show that other important parameters, such as the label size and (in some sense) the dimension, may depend only on j. In some of our metric data structures (resp., embeddings) we achieve both prioritized stretch (resp., distortion) and prioritized label size (resp., dimension) simultaneously. The worst-case performance of our metric data structures and embeddings is typically asymptotically no worse than that of their non-prioritized counterparts.
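The paper's constructions are involved, but the flavor of a prioritized guarantee can be sketched with a toy landmark scheme (an illustration only, not the paper's construction): make the top-ranked vertices landmarks, label every vertex with its distances to the landmarks, and answer queries through the best landmark. A pair containing the top-ranked vertex is then answered exactly, while low-priority pairs may pay stretch:

```python
from itertools import product

INF = float("inf")

def all_pairs_shortest_paths(n, edges):
    """Floyd-Warshall on an undirected weighted graph."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k, i, j in product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def make_labels(d, priority, k):
    """Label each vertex with its distances to the top-k ranked vertices."""
    landmarks = priority[:k]
    return {v: {l: d[v][l] for l in landmarks} for v in range(len(d))}

def query(labels, u, v):
    """Estimate d(u, v) through the best landmark; exact if u or v is one."""
    return min(labels[u][l] + labels[v][l] for l in labels[u])

# 4-cycle with unit weights; the priority ranking puts vertex 1 first.
d = all_pairs_shortest_paths(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)])
labels = make_labels(d, priority=[1, 3, 0, 2], k=1)
print(query(labels, 1, 3), d[1][3])   # exact: the pair contains the landmark
print(query(labels, 0, 3), d[0][3])   # stretched: low-priority pair
```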
Citations: 22
Clustered Integer 3SUM via Additive Combinatorics
Pub Date : 2015-02-18 DOI: 10.1145/2746539.2746568
Timothy M. Chan, Moshe Lewenstein
We present a collection of new results on problems related to 3SUM, including: (i) The first truly subquadratic algorithms for computing the (min,+) convolution for monotone increasing sequences with integer values bounded by O(n), for solving 3SUM for monotone sets in 2D with integer coordinates bounded by O(n), and for preprocessing a binary string for histogram indexing (also called jumbled indexing). The running time is O(n^((9+√177)/12) polylog n) = O(n^1.859) with randomization, or O(n^1.864) deterministically. This greatly improves the previous n^2/2^Ω(√(log n)) time bound obtained from Williams' recent result on all-pairs shortest paths [STOC'14], and answers an open question raised by several researchers studying the histogram indexing problem. (ii) The first algorithm for histogram indexing for any constant alphabet size that achieves truly subquadratic preprocessing time and truly sublinear query time. (iii) A truly subquadratic algorithm for integer 3SUM in the case when the given set can be partitioned into n^(1-δ) clusters each covered by an interval of length n, for any constant δ>0. (iv) An algorithm to preprocess any set of n integers so that subsequently 3SUM on any given subset can be solved in O(n^(13/7) polylog n) time. All these results are obtained by a surprising new technique, based on the Balog-Szemerédi-Gowers Theorem from additive combinatorics.
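For context, the textbook benchmark that these subquadratic bounds improve on is the classical O(n^2) hashing-based 3SUM algorithm; a minimal sketch (not the paper's algorithm):

```python
def three_sum(nums):
    """Classical O(n^2) 3SUM baseline: return indices (i, j, k) of three
    distinct entries summing to zero, or None. The paper beats n^2 for
    clustered integer instances; this is only the textbook benchmark."""
    index_of = {}
    for k, x in enumerate(nums):
        index_of.setdefault(x, []).append(k)
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            target = -(nums[i] + nums[j])
            for k in index_of.get(target, ()):
                if k != i and k != j:
                    return i, j, k
    return None

print(three_sum([-5, 1, 4, 2, 7]))   # indices of the triple -5 + 1 + 4 = 0
```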
Citations: 122
Secretary Problems with Non-Uniform Arrival Order
Pub Date : 2015-02-07 DOI: 10.1145/2746539.2746602
Thomas Kesselheim, Robert D. Kleinberg, Rad Niazadeh
For a number of problems in the theory of online algorithms, it is known that the assumption that elements arrive in uniformly random order enables the design of algorithms with much better performance guarantees than under worst-case assumptions. The quintessential example of this phenomenon is the secretary problem, in which an algorithm attempts to stop a sequence at the moment it observes the maximum value in the sequence. As is well known, if the sequence is presented in uniformly random order there is an algorithm that succeeds with probability 1/e, whereas no non-trivial performance guarantee is possible if the elements arrive in worst-case order. In many of the applications of online algorithms, it is reasonable to assume there is some randomness in the input sequence, but unreasonable to assume that the arrival ordering is uniformly random. This work initiates an investigation into relaxations of the random-ordering hypothesis in online algorithms, by focusing on the secretary problem and asking what performance guarantees one can prove under relaxed assumptions. Toward this end, we present two sets of properties of distributions over permutations as sufficient conditions, called the (p,q,δ)-block-independence property and the (k,δ)-uniform-induced-ordering property. We show these two are asymptotically equivalent by borrowing some techniques from the celebrated approximation theory. Moreover, we show they both imply the existence of secretary algorithms with constant probability of correct selection, approaching the optimal constant 1/e as the related parameters of the property tend towards their extreme values. Both of these properties are significantly weaker than the usual assumption of uniform randomness; we substantiate this by providing several constructions of distributions that satisfy (p,q,δ)-block-independence. As one application of our investigation, we prove that Θ(log log n) is the minimum entropy of any permutation distribution that permits constant probability of correct selection in the secretary problem with n elements. While our block-independence condition is sufficient for constant probability of correct selection, it is not necessary; however, we present complexity-theoretic evidence that no simple necessary and sufficient criterion exists. Finally, we explore the extent to which the performance guarantees of other algorithms are preserved when one relaxes the uniform random ordering assumption to (p,q,δ)-block-independence, obtaining a negative result for the weighted bipartite matching algorithm of Korula and Pal.
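The classical 1/e guarantee under uniformly random order is easy to verify empirically. A quick simulation of the standard rule (observe roughly n/e elements, then stop at the first record) under the uniform-order assumption:

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the classical 1/e-rule under a uniformly random
    arrival order; returns True iff the best element is selected."""
    order = list(range(n))          # values 0..n-1, where n-1 is the best
    rng.shuffle(order)
    k = round(n / math.e)           # length of the observation phase
    threshold = max(order[:k], default=-1)
    for v in order[k:]:
        if v > threshold:
            return v == n - 1       # stop at the first record
    return order[-1] == n - 1       # otherwise forced to take the last

rng = random.Random(0)
p = sum(secretary_trial(50, rng) for _ in range(20000)) / 20000
print(f"empirical success probability: {p:.3f}  (1/e = {1/math.e:.3f})")
```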
Citations: 34
Optimal Data-Dependent Hashing for Approximate Near Neighbors
Pub Date : 2015-01-05 DOI: 10.1145/2746539.2746553
Alexandr Andoni, Ilya P. Razenshteyn
We show an optimal data-dependent hashing scheme for the approximate near neighbor problem. For an n-point dataset in a d-dimensional space our data structure achieves query time O(d · n^(ρ+o(1))) and space O(n^(1+ρ+o(1)) + d · n), where ρ = 1/(2c^2 - 1) for the Euclidean space and approximation c > 1. For the Hamming space, we obtain an exponent of ρ = 1/(2c - 1). Our result completes the direction set forth in (Andoni, Indyk, Nguyen, Razenshteyn 2014), who gave a proof-of-concept that data-dependent hashing can outperform classic Locality Sensitive Hashing (LSH). In contrast to (Andoni, Indyk, Nguyen, Razenshteyn 2014), the new bound is not only optimal, but in fact improves over the best (optimal) LSH data structures (Indyk, Motwani 1998) (Andoni, Indyk 2006) for all approximation factors c > 1. From the technical perspective, we proceed by decomposing an arbitrary dataset into several subsets that are, in a certain sense, pseudo-random.
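For contrast with the data-dependent scheme, classic data-independent LSH for the Hamming space is plain bit sampling; a minimal sketch showing that a near pair collides far more often than a far pair (all parameters are illustrative):

```python
import random

def bit_sample_hash(point, coords):
    """Classic bit-sampling LSH for the Hamming cube: project the point
    onto a random subset of coordinates. Data-independent, unlike the
    paper's scheme, which adapts the hash family to the dataset."""
    return tuple(point[i] for i in coords)

rng = random.Random(1)
d, k, trials = 64, 8, 5000
x = [rng.randint(0, 1) for _ in range(d)]
near = x[:]
near[0] ^= 1                       # Hamming distance 1 from x
far = x[:]
for i in range(d // 2):            # Hamming distance d/2 from x
    far[i] ^= 1

def collision_rate(a, b):
    hits = 0
    for _ in range(trials):
        coords = rng.sample(range(d), k)
        hits += bit_sample_hash(a, coords) == bit_sample_hash(b, coords)
    return hits / trials

p_near, p_far = collision_rate(x, near), collision_rate(x, far)
print(f"near-pair collision rate {p_near:.3f}, far-pair {p_far:.3f}")
```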
Citations: 263
Sum of Squares Lower Bounds from Pairwise Independence
Pub Date : 2015-01-04 DOI: 10.1145/2746539.2746625
B. Barak, S. Chan, Pravesh Kothari
We prove that for every ε > 0 and predicate P: {0,1}^k → {0,1} that supports a pairwise independent distribution, there exists an instance I of the Max P constraint satisfaction problem on n variables such that no assignment can satisfy more than a |P^(-1)(1)|/2^k + ε fraction of I's constraints, but the degree Ω(n) Sum of Squares semidefinite programming hierarchy cannot certify that I is unsatisfiable. Similar results were previously only known for weaker hierarchies.
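A concrete instance of the hypothesis (an illustration, not the paper's construction): the 3-ary XOR predicate supports a pairwise independent distribution, namely the uniform distribution over its satisfying assignments. A brute-force check:

```python
from itertools import product

# The predicate P(x1, x2, x3) = x1 XOR x2 XOR x3 supports a pairwise
# independent distribution: the uniform distribution over P^(-1)(1).
P = lambda x: x[0] ^ x[1] ^ x[2]
support = [x for x in product((0, 1), repeat=3) if P(x) == 1]

# Pairwise independence: every pair of coordinates is uniform on {0,1}^2.
for i in range(3):
    for j in range(3):
        if i == j:
            continue
        for a, b in product((0, 1), repeat=2):
            count = sum(1 for x in support if (x[i], x[j]) == (a, b))
            assert count * 4 == len(support)   # probability exactly 1/4
print(f"{len(support)} satisfying assignments; all coordinate pairs uniform")
```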
Citations: 47
Solving the Shortest Vector Problem in 2^n Time Using Discrete Gaussian Sampling: Extended Abstract
Pub Date : 2014-12-26 DOI: 10.1145/2746539.2746606
Divesh Aggarwal, D. Dadush, O. Regev, Noah Stephens-Davidowitz
We give a randomized 2^(n+o(n))-time and space algorithm for solving the Shortest Vector Problem (SVP) on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm: the deterministic Õ(4^n)-time and Õ(2^n)-space algorithm of Micciancio and Voulgaris (STOC 2010, SIAM J. Comp. 2013). In fact, we give a conceptually simple algorithm that solves the (in our opinion, even more interesting) problem of discrete Gaussian sampling (DGS). More specifically, we show how to sample 2^(n/2) vectors from the discrete Gaussian distribution at any parameter in 2^(n+o(n)) time and space. (Prior work only solved DGS for very large parameters.) Our SVP result then follows from a natural reduction from SVP to DGS. In addition, we give a more refined algorithm for DGS above the so-called smoothing parameter of the lattice, which can generate 2^(n/2) discrete Gaussian samples in just 2^(n/2+o(n)) time and space. Among other things, this implies a 2^(n/2+o(n))-time and space algorithm for 1.93-approximate decision SVP.
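The one-dimensional discrete Gaussian D_{Z,s} assigns each integer x probability proportional to exp(-πx²/s²); a toy sampler over a truncated support conveys the distribution being sampled (the paper, of course, samples from n-dimensional lattices; the cutoff and parameter below are illustrative):

```python
import math
import random

def discrete_gaussian_1d(s, rng, tail=10):
    """Toy sampler for the 1-D discrete Gaussian D_{Z,s}, where Pr[x] is
    proportional to exp(-pi * x^2 / s^2). Truncating the support at
    |x| <= tail*s loses only a negligible fraction of the mass."""
    cutoff = int(tail * s)
    xs = list(range(-cutoff, cutoff + 1))
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in xs]
    return rng.choices(xs, weights=weights)[0]

rng = random.Random(0)
samples = [discrete_gaussian_1d(3.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(f"empirical mean {mean:.3f} (should be near 0)")
```

Recomputing the weight table on every call is wasteful but keeps the sketch short; a real sampler would precompute it once per parameter.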
Citations: 93
Greedy Algorithms for Steiner Forest
Pub Date : 2014-12-24 DOI: 10.1145/2746539.2746590
Anupam Gupta, Amit Kumar
In the Steiner Forest problem, we are given terminal pairs si, ti, and need to find the cheapest subgraph which connects each of the terminal pairs together. In 1991, Agrawal, Klein, and Ravi gave a primal-dual constant-factor approximation algorithm for this problem. Until this work, the only constant-factor approximations known were via linear programming relaxations. In this paper, we consider the following greedy algorithm: Given terminal pairs in a metric space, a terminal is active if its distance to its partner is non-zero. Pick the two closest active terminals (say si, tj), set the distance between them to zero, and buy a path connecting them. Recompute the metric, and repeat. It has long been open to analyze this greedy algorithm. Our main result shows that this algorithm is a constant-factor approximation. We use this algorithm to give new, simpler constructions of cost-sharing schemes for Steiner forest. In particular, the first "group-strict" cost-shares for this problem imply a very simple combinatorial sampling-based algorithm for stochastic Steiner forest.
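The greedy rule quoted above is concrete enough to run directly. A small sketch on a finite metric given as a distance matrix; the merge step (recomputing the metric after zeroing a distance) is implemented here as a min-plus closure, which is an implementation choice, not from the paper:

```python
from itertools import combinations, product

def greedy_steiner_forest(dist, pairs):
    """The greedy rule from the abstract: while some terminal is active
    (nonzero distance to its partner), buy a shortest path between the
    two closest active terminals, set their distance to zero, and
    recompute the metric. Returns the total cost paid."""
    d = [row[:] for row in dist]
    n = len(d)

    def closure():                       # min-plus (metric) closure
        for k, i, j in product(range(n), repeat=3):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

    partner = {}
    for s, t in pairs:
        partner[s], partner[t] = t, s
    terminals = list(partner)
    cost = 0
    while True:
        active = [v for v in terminals if d[v][partner[v]] > 0]
        cand = [(d[u][v], u, v) for u, v in combinations(active, 2)
                if d[u][v] > 0]
        if not cand:
            break
        w, u, v = min(cand)              # two closest active terminals
        cost += w                        # buy a shortest u-v path
        d[u][v] = d[v][u] = 0
        closure()                        # recompute the metric
    return cost

# Four points on a line at 0, 1, 2, 3 with pairs (0,1) and (2,3).
line = [[abs(i - j) for j in range(4)] for i in range(4)]
print(greedy_steiner_forest(line, [(0, 1), (2, 3)]))
```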
Citations: 22
A Characterization of the Capacity of Online (causal) Binary Channels
Pub Date : 2014-12-19 DOI: 10.1145/2746539.2746591
Zitan Chen, S. Jaggi, M. Langberg
In the binary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword x = (x1,...,xn) ∈ {0,1}n bit by bit via a channel limited to at most pn corruptions. The channel is "online" in the sense that at the ith step of communication the channel decides whether to corrupt the ith bit or not based on its view so far, i.e., its decision depends only on the transmitted bits (x1,...,xi). This is in contrast to the classical adversarial channel in which the error is chosen by a channel that has full knowledge of the transmitted codeword x. In this work we study the capacity of binary online channels for two corruption models: the bit-flip model in which the channel may flip at most pn of the bits of the transmitted codeword, and the erasure model in which the channel may erase at most pn bits of the transmitted codeword. Specifically, for both error models we give a full characterization of the capacity as a function of p. The online channel (in both the bit-flip and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we present and analyze a coding scheme that improves on the previously suggested lower bounds and matches the previously suggested upper bounds thus implying a tight characterization.
Citations: 27
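The causal restriction in the model above is easy to state operationally: the channel's decision at step i may depend only on the prefix x1,...,xi and its remaining corruption budget, never on future bits. A toy simulation of the bit-flip variant, with a hypothetical greedy adversary of our own choosing (nothing here is the paper's construction):

```python
def online_bitflip_channel(codeword, p, adversary):
    """Online (causal) bit-flip channel: at step i the adversary sees only
    the prefix x[:i+1], and may flip at most floor(p * n) bits in total."""
    n = len(codeword)
    budget = int(p * n)
    received = []
    for i in range(n):
        prefix = codeword[:i + 1]          # causal view: no future bits
        flip = budget > 0 and adversary(prefix, budget, n)
        received.append(codeword[i] ^ int(flip))
        budget -= int(flip)
    return received


# Hypothetical adversary: flip every transmitted 1 while budget lasts.
def flip_ones(prefix, budget, n):
    return prefix[-1] == 1


print(online_bitflip_channel([1, 0, 1, 1], 0.5, flip_ones))  # → [0, 0, 0, 1]
```

The adversarial channel of the classical model would instead receive the whole codeword up front; restricting `adversary` to the prefix is exactly what makes the channel "online".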
Inapproximability of Truthful Mechanisms via Generalizations of the VC Dimension
Pub Date : 2014-12-19 DOI: 10.1145/2746539.2746597
Amit Daniely, Michael Schapira, Gal Shahaf
Algorithmic mechanism design (AMD) studies the delicate interplay between computational efficiency, truthfulness, and optimality. We focus on AMD's paradigmatic problem: combinatorial auctions. We present a new generalization of the VC dimension to multivalued collections of functions, which encompasses the classical VC dimension, Natarajan dimension, and Steele dimension. We present a corresponding generalization of the Sauer-Shelah Lemma and harness this VC machinery to establish inapproximability results for deterministic truthful mechanisms. Our results essentially unify all inapproximability results for deterministic truthful mechanisms for combinatorial auctions to date and establish new separation gaps between truthful and non-truthful algorithms.
Citations: 27
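For readers who want the classical notion the paper generalizes: a class of {0,1}-valued functions shatters a set S if it realizes all 2^|S| labelings of S, and the VC dimension is the size of the largest shattered set. A brute-force check for a finite class (a toy sketch of ours, not code from the paper):

```python
from itertools import combinations


def vc_dimension(domain, functions):
    """Largest d such that some d-subset of `domain` is shattered by
    `functions`, i.e. all 2**d binary labelings of the subset occur."""
    best = 0
    for d in range(1, len(domain) + 1):
        shattered = any(
            len({tuple(f(x) for x in subset) for f in functions}) == 2 ** d
            for subset in combinations(domain, d)
        )
        if not shattered:
            break  # shattering is monotone: no larger set can work either
        best = d
    return best


# Threshold functions x -> [x >= t] on {0, 1, 2, 3}: every singleton is
# shattered, but no 2-set, since the labeling (1, 0) never occurs.
thresholds = [lambda x, t=t: int(x >= t) for t in range(5)]
print(vc_dimension(range(4), thresholds))  # → 1
```

The multivalued generalizations in the paper (covering the Natarajan and Steele dimensions) replace the two-labeling condition with richer labeling families, but the shattering template is the same.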
Proceedings of the forty-seventh annual ACM symposium on Theory of Computing