
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

Pseudodeterministic constructions in subexponential time
Pub Date : 2016-12-06 DOI: 10.1145/3055399.3055500
I. Oliveira, R. Santhanam
We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n} of primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^{|p_n|}, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often. This result follows from a more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is ϒ-dense if for large enough n, |Q ∩ {0,1}^n| ≥ ϒ·2^n. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_n} of sets, H_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H′_n} of sets, H′_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H′_n ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm.
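As a concrete illustration of the pseudodeterminism requirement (the same canonical output on almost every run), here is a small Python sketch, not taken from the paper, contrasting an ordinary randomized search for a quadratic non-residue with a variant that always returns the same canonical witness; the checker `outputs_same_value` and both toy algorithms are illustrative names introduced here.

```python
import random

def pseudo_det_qnr(p):
    """Toy canonical-output search: return the *smallest* quadratic non-residue
    mod p by scanning candidates in a fixed order, so every run gives the same
    answer (assumes p is an odd prime)."""
    for a in range(2, p):
        if pow(a, (p - 1) // 2, p) == p - 1:  # Euler's criterion: a is a non-residue
            return a
    return None

def randomized_qnr(p):
    """Ordinary randomized search: returns *some* quadratic non-residue mod p,
    but typically a different one on each run (not pseudodeterministic)."""
    while True:
        a = random.randrange(2, p)
        if pow(a, (p - 1) // 2, p) == p - 1:
            return a

def outputs_same_value(alg, arg, runs=20, threshold=0.9):
    """Empirically check the pseudodeterminism condition: does alg(arg) return
    one fixed value on at least a `threshold` fraction of runs?"""
    outs = [alg(arg) for _ in range(runs)]
    most_common = max(set(outs), key=outs.count)
    return outs.count(most_common) / runs >= threshold

if __name__ == "__main__":
    p = 10007  # an odd prime
    print(outputs_same_value(pseudo_det_qnr, p))   # True: canonical output every run
    print(outputs_same_value(randomized_qnr, p))   # almost surely False
```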
Citations: 37
Removal lemmas with polynomial bounds
Pub Date : 2016-11-30 DOI: 10.1145/3055399.3055404
Lior Gishboliner, A. Shapira
We give new sufficient and necessary criteria guaranteeing that a hereditary graph property can be tested with a polynomial query complexity. Although both are simple combinatorial criteria, they imply almost all prior positive and negative results of this type, as well as many new ones. One striking application of our results is that every semi-algebraic graph property (e.g., being an interval graph, a unit-disc graph etc.) can be tested with a polynomial query complexity. This confirms a conjecture of Alon. The proofs combine probabilistic ideas together with a novel application of a conditional regularity lemma for matrices, due to Alon, Fischer and Newman.
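For intuition about what a polynomial-query tester looks like, here is a minimal Python sketch, not from the paper, of the classical sampling tester for triangle-freeness, the archetypal property whose analysis rests on a removal lemma; the function name and the query budget `q` are illustrative choices.

```python
import random

def test_triangle_freeness(adj, q=500):
    """One-sided tester sketch: sample q random vertex triples; reject iff a
    sampled triple spans a triangle. Triangle-free graphs are always accepted;
    the (triangle) removal lemma is what bounds the number of samples needed to
    catch graphs that are far from triangle-free.
    `adj` maps each vertex to its set of neighbours."""
    vertices = list(adj)
    for _ in range(q):
        u, v, w = random.sample(vertices, 3)
        if v in adj[u] and w in adj[u] and w in adj[v]:
            return False  # found a triangle: reject
    return True  # no triangle seen: accept

# Tiny usage example: a 4-cycle (triangle-free) vs. a 4-clique.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
clique = {v: {u for u in range(4) if u != v} for v in range(4)}
print(test_triangle_freeness(cycle))   # True
print(test_triangle_freeness(clique))  # False (every triple in a clique is a triangle)
```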
Citations: 20
Exponential separation of quantum communication and classical information
Pub Date : 2016-11-28 DOI: 10.1145/3055399.3055401
Anurag Anshu, D. Touchette, Penghui Yao, Nengkun Yu
We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550-1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice's and Bob's communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: say Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound.
Citations: 11
Sampling random spanning trees faster than matrix multiplication
Pub Date : 2016-11-22 DOI: 10.1145/3055399.3055499
D. Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, Sushant Sachdeva
We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{-2}) time, without using the Johnson-Lindenstrauss lemma which requires Õ(min{(m + |S|)ε^{-2}, m + nε^{-4} + |S|ε^{-2}}) time. We combine this approximation procedure with an error correction procedure for handling edges where our estimate isn't sufficiently accurate.
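The key structural fact used above, that effective resistances are preserved when vertices are eliminated via a Schur complement, can be checked numerically. The sketch below is illustrative only, not the paper's algorithm, and uses exact dense linear algebra rather than the paper's approximate Schur complements: it builds a weighted Laplacian, eliminates two vertices, and verifies that an effective resistance is unchanged.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Build the weighted graph Laplacian L = D - A from an edge list (u, v, w)."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def effective_resistance(L, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via the pseudoinverse."""
    chi = np.zeros(L.shape[0])
    chi[u], chi[v] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi

def schur_complement(L, keep):
    """Eliminate the vertices outside `keep`; the result is again a graph
    Laplacian on `keep`, and it preserves effective resistances among `keep`."""
    keep = list(keep)
    drop = [i for i in range(L.shape[0]) if i not in keep]
    A = L[np.ix_(keep, keep)]
    B = L[np.ix_(keep, drop)]
    C = L[np.ix_(drop, drop)]
    return A - B @ np.linalg.inv(C) @ B.T

if __name__ == "__main__":
    # Weighted graph on 5 vertices.
    edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 4, 3.0), (4, 0, 1.0), (1, 3, 0.5)]
    L = laplacian(5, edges)
    r_full = effective_resistance(L, 0, 2)
    S = schur_complement(L, keep=[0, 1, 2])   # eliminate vertices 3 and 4
    r_schur = effective_resistance(S, 0, 2)   # kept vertices keep their order 0, 1, 2
    print(np.isclose(r_full, r_schur))        # True: elimination preserves R_eff
```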
Citations: 64
Simple mechanisms for subadditive buyers via duality
Pub Date : 2016-11-21 DOI: 10.1145/3055399.3055465
Yang Cai, Mingfei Zhao
We provide simple and approximately revenue-optimal mechanisms in the multi-item multi-bidder settings. We unify and improve all previous results, as well as generalize the results to broader cases. In particular, we prove that the better of the following two simple, deterministic and Dominant Strategy Incentive Compatible mechanisms, a sequential posted price mechanism or an anonymous sequential posted price mechanism with entry fee, achieves a constant fraction of the optimal revenue among all randomized, Bayesian Incentive Compatible mechanisms, when buyers' valuations are XOS over independent items. If the buyers' valuations are subadditive over independent items, the approximation factor degrades to O(log m), where m is the number of items. We obtain our results by first extending the Cai-Devanur-Weinberg duality framework to derive an effective benchmark of the optimal revenue for subadditive bidders, and then analyzing this upper bound with new techniques.
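To make the first mechanism concrete, here is a minimal Python sketch of a sequential posted-price mechanism for unit-demand buyers; it is illustrative only, since the paper's guarantees concern XOS and subadditive valuations, an entry-fee variant, and prices derived from the dual benchmark, none of which are modeled here.

```python
def sequential_posted_price(buyers, prices):
    """Buyers arrive in a fixed order; each buys the remaining item maximizing
    her utility v(item) - price, provided that utility is strictly positive.
    `buyers` is a list of dicts mapping item -> value; `prices` maps item -> posted price.
    Returns the allocation and the revenue collected."""
    remaining = set(prices)
    allocation, revenue = {}, 0.0
    for i, values in enumerate(buyers):
        best_item, best_utility = None, 0.0
        for item in remaining:
            utility = values.get(item, 0.0) - prices[item]
            if utility > best_utility:
                best_item, best_utility = item, utility
        if best_item is not None:
            allocation[i] = best_item
            revenue += prices[best_item]
            remaining.remove(best_item)
    return allocation, revenue

# Usage: two items, three unit-demand buyers.
buyers = [{"a": 3.0, "b": 1.0}, {"a": 2.5, "b": 2.0}, {"b": 4.0}]
prices = {"a": 2.0, "b": 1.5}
print(sequential_posted_price(buyers, prices))  # ({0: 'a', 1: 'b'}, 3.5)
```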
Citations: 112
Approximate near neighbors for general symmetric norms
Pub Date : 2016-11-18 DOI: 10.1145/3055399.3055418
Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya P. Razenshteyn, Erik Waingarten
We show that every symmetric normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every n, d = n^{o(1)}, and every d-dimensional symmetric norm ||·||, there exists a data structure for (log log n)-approximate nearest neighbor search over ||·|| for n-point datasets achieving n^{o(1)} query time and n^{1+o(1)} space. The main technical ingredient of the algorithm is a low-distortion embedding of a symmetric norm into a low-dimensional iterated product of top-k norms. We also show that our techniques cannot be extended to general norms.
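The building block of the embedding, the top-k norm (the sum of the k largest coordinates in absolute value), is easy to state in code; the sketch below is an illustration of that definition only, not of the data structure.

```python
import heapq

def top_k_norm(x, k):
    """The top-k norm: sum of the k largest coordinates in absolute value.
    k = 1 gives the l_infinity norm and k = len(x) gives the l_1 norm; the paper
    embeds any symmetric norm into a low-dimensional iterated product of such norms."""
    return sum(heapq.nlargest(k, (abs(t) for t in x)))

x = [3.0, -1.0, 0.5, -7.0, 2.0]
print(top_k_norm(x, 1))        # 7.0  (l_infinity)
print(top_k_norm(x, 2))        # 10.0
print(top_k_norm(x, len(x)))   # 13.5 (l_1)
```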
Citations: 35
A reverse Minkowski theorem
Pub Date : 2016-11-18 DOI: 10.1145/3055399.3055434
O. Regev, Noah Stephens-Davidowitz
We prove a conjecture due to Dadush, showing that if ℒ ⊂ ℝ^n is a lattice such that det(ℒ′) ≥ 1 for all sublattices ℒ′ ⊆ ℒ, then ∑_{y ∈ ℒ} e^{-t^2 ||y||^2} ≤ 3/2, where t := 10(log n + 2). From this we also derive bounds on the number of short lattice vectors and on the covering radius.
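As a sanity check of the inequality on the simplest lattice satisfying the hypothesis, the integer lattice ℤ^n (every sublattice has determinant at least 1), the Gaussian sum factorizes coordinate-wise and can be evaluated numerically; the snippet below assumes the natural logarithm in t := 10(log n + 2), which is a convention guess, and truncates the one-dimensional sum.

```python
import math

def gaussian_mass_Zn(n, t, radius=10):
    """Sum over y in Z^n of exp(-t^2 * ||y||^2) equals the n-th power of the
    one-dimensional sum, truncated here at |k| <= radius (the tail is negligible
    for t >= 1)."""
    one_dim = 1.0 + 2.0 * sum(math.exp(-t * t * k * k) for k in range(1, radius + 1))
    return one_dim ** n

for n in (1, 10, 1000):
    t = 10 * (math.log(n) + 2)              # assuming natural log in t := 10(log n + 2)
    print(n, gaussian_mass_Zn(n, t) <= 1.5)  # True in all three cases
```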
Citations: 28
Almost-polynomial ratio ETH-hardness of approximating densest k-subgraph
Pub Date : 2016-11-18 DOI: 10.1145/3055399.3055412
Pasin Manurangsi
In the Densest k-Subgraph (DkS) problem, given an undirected graph G and an integer k, the goal is to find a subgraph of G on k vertices that contains the maximum number of edges. Even though Bhaskara et al.'s state-of-the-art algorithm for the problem achieves only an O(n^{1/4+ε}) approximation ratio, previous attempts at proving hardness of approximation, including those under average case assumptions, fail to achieve a polynomial ratio; the best ratios ruled out under any worst case assumption and any average case assumption are only any constant (Raghavendra and Steurer) and 2^{O(log^{2/3} n)} (Alon et al.) respectively. In this work, we show, assuming the exponential time hypothesis (ETH), that there is no polynomial-time algorithm that approximates Densest k-Subgraph to within an n^{1/(log log n)^c} factor of the optimum, where c > 0 is a universal constant independent of n. In addition, our result has perfect completeness, meaning that we prove that it is ETH-hard to even distinguish, in polynomial time, between the case in which G contains a k-clique and the case in which every induced k-subgraph of G has density at most 1/n^{1/(log log n)^c}. Moreover, if we make a stronger assumption that there is some constant ε > 0 such that no subexponential-time algorithm can distinguish between a satisfiable 3SAT formula and one which is only (1 - ε)-satisfiable (also known as Gap-ETH), then the ratio above can be improved to n^{f(n)} for any function f whose limit is zero as n goes to infinity (i.e., f ∈ o(1)).
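For reference, the quantity being approximated can be computed exactly by brute force on tiny instances; the sketch below only illustrates the problem definition and says nothing about the hardness result itself.

```python
from itertools import combinations

def densest_k_subgraph_bruteforce(edges, vertices, k):
    """Exact Densest k-Subgraph by exhaustive search: return the k-vertex subset
    inducing the most edges. Exponential time, so only for tiny inputs; the
    hardness result above concerns how well polynomial-time algorithms can
    approximate this quantity."""
    edge_set = {frozenset(e) for e in edges}
    best_subset, best_count = None, -1
    for subset in combinations(vertices, k):
        count = sum(1 for u, v in combinations(subset, 2) if frozenset((u, v)) in edge_set)
        if count > best_count:
            best_subset, best_count = subset, count
    return best_subset, best_count

# Usage: a triangle plus a pendant path; the densest 3-subgraph is the triangle.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
print(densest_k_subgraph_bruteforce(edges, range(5), 3))  # ((0, 1, 2), 3)
```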
Citations: 143
Online and dynamic algorithms for set cover
Pub Date : 2016-11-17 DOI: 10.1145/3055399.3055493
Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, Debmalya Panigrahi
In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of "active" elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_t), where n_t is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_t, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research.
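The classic offline greedy algorithm that inspires the first dynamic technique is easy to state; the sketch below is that offline baseline with its O(log n) guarantee, not the paper's dynamic data structure.

```python
def greedy_set_cover(universe, sets):
    """Classic offline greedy: repeatedly pick the set covering the most
    still-uncovered elements. Achieves an O(log n) approximation, which is the
    benchmark the dynamic and online algorithms above compete with.
    `sets` maps a set name to a collection of elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & set(sets[s])))
        if not uncovered & set(sets[best]):
            raise ValueError("the given sets do not cover the universe")
        chosen.append(best)
        uncovered -= set(sets[best])
    return chosen

universe = range(1, 8)
sets = {"A": {1, 2, 3, 4}, "B": {4, 5, 6}, "C": {6, 7}, "D": {1, 5, 7}}
print(greedy_set_cover(universe, sets))  # ['A', 'B', 'C']
```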
Citations: 72
Probabilistic rank and matrix rigidity
Pub Date : 2016-11-17 DOI: 10.1145/3055399.3055484
Josh Alman, Richard Ryan Williams
We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^n × 2^n Walsh-Hadamard transform H_n (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only 2^{εn} entries in each row and make the rank of H_n drop below 2^{n(1-Ω(ε^2/log(1/ε)))}, for all small ε > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as H_n via L. Valiant's matrix rigidity approach. We also show non-trivial rigidity upper bounds for H_n with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit n × n Boolean matrices which maintain rank at least 2^{(log n)^{1-δ}} after n^2/2^{(log n)^{δ/2}} modified entries (over any field, for any δ > 0) would yield an explicit function that does not have sub-quadratic-size AC^0 circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over ℝ which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. In particular, we show that matrices defined by these seemingly-difficult circuit classes actually have low probabilistic rank and sign-rank, respectively. An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve longstanding lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function f which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of f (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of f, via an equivalence with probabilistic rank.
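For readers who want the matrix family in hand, the sketch below (illustrative only) constructs H_n by repeated Kronecker products of the 2 × 2 Sylvester base matrix and confirms it has full rank; the rigidity question above asks how many entries must be altered before the rank can drop dramatically.

```python
import numpy as np

def walsh_hadamard(n):
    """The 2^n x 2^n Walsh-Hadamard (Sylvester) matrix, with entries
    H_n[x, y] = (-1)^{<x, y>}, built by repeated Kronecker products."""
    H = np.array([[1]])
    base = np.array([[1, 1], [1, -1]])
    for _ in range(n):
        H = np.kron(H, base)
    return H

H3 = walsh_hadamard(3)
print(H3.shape)                   # (8, 8)
print(np.linalg.matrix_rank(H3))  # 8: full rank over the reals
```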
Citations: 44