
Latest publications from Proceedings of the forty-eighth annual ACM symposium on Theory of Computing

Breaking the logarithmic barrier for truthful combinatorial auctions with submodular bidders
Pub Date : 2016-02-18 DOI: 10.1145/2897518.2897569
Shahar Dobzinski
We study a central problem in Algorithmic Mechanism Design: constructing truthful mechanisms for welfare maximization in combinatorial auctions with submodular bidders. Dobzinski, Nisan, and Schapira provided the first mechanism that guarantees a non-trivial approximation ratio of O(log^2 m) [STOC'06], where m is the number of items. This was subsequently improved to O(log m · log log m) [Dobzinski, APPROX'07] and then to O(log m) [Krysta and Vocking, ICALP'12]. In this paper we develop the first mechanism that breaks the logarithmic barrier. Specifically, the mechanism provides an approximation ratio of O(√(log m)). Similarly to previous constructions, our mechanism uses polynomially many value and demand queries, and in fact provides the same approximation ratio for the larger class of XOS (a.k.a. fractionally subadditive) valuations. We also develop a computationally efficient implementation of the mechanism for combinatorial auctions with budget-additive bidders. Although in general computing a demand query is NP-hard for budget-additive valuations, we observe that the specific form of demand queries that our mechanism uses can be efficiently computed when bidders are budget additive.
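The value and demand queries the abstract refers to can be made concrete with a toy sketch. Everything below is illustrative: the function names are hypothetical, and the brute-force demand query is the NP-hard general computation that the paper's specific query form avoids for budget-additive bidders.

```python
from itertools import combinations

def budget_additive_value(values, budget, bundle):
    """Value query for a budget-additive valuation: v(S) = min(budget, sum of item values)."""
    return min(budget, sum(values[i] for i in bundle))

def demand_query_bruteforce(values, budget, prices):
    """Demand query: a bundle S maximizing v(S) - p(S).
    Brute force over all bundles; NP-hard in general, fine for tiny instances."""
    items = range(len(values))
    best, best_utility = frozenset(), 0.0
    for r in range(len(values) + 1):
        for S in combinations(items, r):
            u = budget_additive_value(values, budget, S) - sum(prices[i] for i in S)
            if u > best_utility:
                best, best_utility = frozenset(S), u
    return best, best_utility

values = [4, 3, 2]   # hypothetical instance: 3 items, budget 5
budget = 5
prices = [1.0, 1.0, 1.5]
bundle, utility = demand_query_bruteforce(values, budget, prices)
```

Here the bidder demands item 0 alone for utility min(5, 4) − 1 = 3, illustrating how the budget cap makes larger bundles no more attractive.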
Citations: 35
Bipartite perfect matching is in quasi-NC
Pub Date : 2016-01-23 DOI: 10.1145/2897518.2897564
Stephen A. Fenner, R. Gurjar, T. Thierauf
We show that the bipartite perfect matching problem is in quasi-NC^2. That is, it has uniform circuits of quasi-polynomial size n^{O(log n)} and O(log^2 n) depth. Previously, only an exponential upper bound was known on the size of such circuits with poly-logarithmic depth. We obtain our result by an almost complete derandomization of the famous Isolation Lemma when applied to yield an efficient randomized parallel algorithm for the bipartite perfect matching problem.
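The Isolation Lemma underlying the derandomization is easy to observe experimentally. The sketch below (a hypothetical instance, not the paper's construction) draws uniform integer weights in [1, 2m] on the complete bipartite graph K_{4,4} and counts how often the minimum-weight perfect matching is unique; the lemma guarantees probability at least 1/2 per trial.

```python
import random
from itertools import permutations

def min_weight_matchings(n, edges, weight):
    """Return all minimum-weight perfect matchings of a bipartite graph
    with parts {0..n-1} x {0..n-1}, by brute-force enumeration."""
    found = []
    for perm in permutations(range(n)):
        m = [(i, perm[i]) for i in range(n)]
        if all(e in edges for e in m):
            found.append((sum(weight[e] for e in m), m))
    best = min(w for w, _ in found)
    return [m for w, m in found if w == best]

random.seed(0)
n = 4
edges = {(i, j) for i in range(n) for j in range(n)}  # complete bipartite K_{4,4}
trials, isolated = 200, 0
for _ in range(trials):
    # Isolation Lemma: uniform weights in [1, 2m] isolate a unique
    # minimum-weight perfect matching with probability >= 1/2.
    weight = {e: random.randint(1, 2 * len(edges)) for e in edges}
    if len(min_weight_matchings(n, edges, weight)) == 1:
        isolated += 1
```

In practice the empirical isolation rate is well above the 1/2 the lemma promises; the paper's contribution is replacing these random weights by deterministically constructed ones.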
Citations: 77
Graph isomorphism in quasipolynomial time [extended abstract]
Pub Date : 2015-12-11 DOI: 10.1145/2897518.2897542
L. Babai
We show that the Graph Isomorphism (GI) problem and the more general problems of String Isomorphism (SI) and Coset Intersection (CI) can be solved in quasipolynomial (exp((log n)^{O(1)})) time. The best previous bound for GI was exp(O(√(n log n))), where n is the number of vertices (Luks, 1983); for the other two problems, the bound was similar, exp(Õ(√n)), where n is the size of the permutation domain (Babai, 1983). Following the approach of Luks’s seminal 1980/82 paper, the problem we actually address is SI. This problem takes two strings of length n and a permutation group G of degree n (the “ambient group”) as input (G is given by a list of generators) and asks whether or not one of the strings can be transformed into the other by some element of G. Luks’s divide-and-conquer algorithm for SI proceeds by recursion on the ambient group. We build on Luks’s framework and attack the obstructions to efficient Luks recurrence via an interplay between local and global symmetry. We construct group theoretic “local certificates” to certify the presence or absence of local symmetry, aggregate the negative certificates to canonical k-ary relations where k = O(log n), and employ combinatorial canonical partitioning techniques to split the k-ary relational structure for efficient divide-and-conquer. We show that in a well-defined sense, Johnson graphs are the only obstructions to effective canonical partitioning. The central element of the algorithm is the “local certificates” routine, which is based on a new group theoretic result, the “Unaffected stabilizers lemma,” that allows us to construct global automorphisms out of local information.
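For contrast with the quasipolynomial bound, here is the trivial baseline: checking isomorphism by trying all n! vertex bijections. This is only the naive benchmark, not Babai's algorithm.

```python
from itertools import permutations

def are_isomorphic(edges_g, edges_h, n):
    """Brute-force graph isomorphism over all n! vertex bijections --
    the exponential baseline the quasipolynomial algorithm improves on."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):
        return False
    for pi in permutations(range(n)):
        # Relabel g's edges through the candidate bijection pi and compare.
        if {frozenset((pi[u], pi[v])) for u, v in g} == h:
            return True
    return False

# A 4-cycle and a relabeled 4-cycle are isomorphic; a 4-cycle and a path are not.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_relabeled = [(2, 0), (0, 3), (3, 1), (1, 2)]
p4 = [(0, 1), (1, 2), (2, 3)]
```

The point of the paper is that this exp(O(n log n))-style search can be replaced by recursion on the ambient permutation group, giving exp((log n)^{O(1)}) time.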
Citations: 631
Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors
Pub Date : 2015-12-08 DOI: 10.1145/2897518.2897529
Samuel B. Hopkins, T. Schramm, Jonathan Shi, David Steurer
We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the sum-of-squares method. We develop new algorithms inspired by analyses of the sum-of-squares method. Our algorithms achieve the same or similar guarantees as sum-of-squares for these problems but the running time is significantly faster. For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of ℝ^n of dimension up to Ω(√n). These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors. For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over ℝ^n of rank up to Ω(n^{4/3}). The best previous algorithm for this problem, due to Ge and Ma (RANDOM 2015), works up to rank Ω(n^{3/2}) but requires quasipolynomial time.
Citations: 118
Sparsified Cholesky and multigrid solvers for connection laplacians
Pub Date : 2015-12-07 DOI: 10.1145/2897518.2897640
Rasmus Kyng, Y. Lee, Richard Peng, Sushant Sachdeva, D. Spielman
We introduce the sparsified Cholesky and sparsified multigrid algorithms for solving systems of linear equations. These algorithms accelerate Gaussian elimination by sparsifying the nonzero matrix entries created by the elimination process. We use these new algorithms to derive the first nearly linear time algorithms for solving systems of equations in connection Laplacians---a generalization of Laplacian matrices that arise in many problems in image and signal processing. We also prove that every connection Laplacian has a linear sized approximate inverse. This is an LU factorization with a linear number of nonzero entries that is a strong approximation of the original matrix. Using such a factorization one can solve systems of equations in a connection Laplacian in linear time. Such a factorization was unknown even for ordinary graph Laplacians.
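The fill-in that the sparsified algorithms combat has a standard graph view: eliminating a variable of a (connection) Laplacian connects all of its neighbours into a clique. The toy sketch below is this standard symbolic elimination, not the paper's algorithm, which sparsifies exactly these fill edges as they appear.

```python
def eliminate(adj, v):
    """One step of symbolic Gaussian elimination on a graph Laplacian:
    removing vertex v connects its neighbours pairwise (fill-in).
    Returns the number of fill edges created."""
    nbrs = adj.pop(v)
    for u in nbrs:
        adj[u].discard(v)
    fill = 0
    for u in nbrs:
        for w in nbrs:
            if u < w and w not in adj[u]:
                adj[u].add(w)
                adj[w].add(u)
                fill += 1
    return fill

# Star K_{1,4}: eliminating the centre first creates a clique on the 4 leaves,
# while eliminating a leaf first creates no fill at all.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
fill_centre_first = eliminate(star, 0)

star2 = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
fill_leaf_first = eliminate(star2, 1)
```

The centre-first order pays C(4,2) = 6 fill edges on this tiny example; on large graphs the cumulative fill is what makes plain Cholesky superlinear, motivating sparsification during elimination.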
Citations: 156
Exact algorithms via monotone local search
Pub Date : 2015-12-05 DOI: 10.1145/2897518.2897551
F. Fomin, Serge Gaspers, D. Lokshtanov, Saket Saurabh
We give a new general approach for designing exact exponential-time algorithms for subset problems. In a subset problem the input implicitly describes a family of sets over a universe of size n and the task is to determine whether the family contains at least one set. A typical example of a subset problem is Weighted d-SAT. Here, the input is a CNF-formula with clauses of size at most d, and an integer W. The universe is the set of variables and the variables have integer weights. The family contains all the subsets S of variables such that the total weight of the variables in S does not exceed W, and setting the variables in S to 1 and the remaining variables to 0 satisfies the formula. Our approach is based on “monotone local search”, where the goal is to extend a partial solution to a solution by adding as few elements as possible. More formally, in the extension problem we are also given as input a subset X of the universe and an integer k. The task is to determine whether one can add at most k elements to X to obtain a set in the (implicitly defined) family. Our main result is that a c^k n^{O(1)} time algorithm for the extension problem immediately yields a randomized algorithm for finding a solution of any size with running time O((2−1/c)^n). In many cases, the extension problem can be reduced to simply finding a solution of size at most k. Furthermore, efficient algorithms for finding small solutions have been extensively studied in the field of parameterized algorithms. Directly applying these algorithms, our theorem yields in one stroke significant improvements over the best known exponential-time algorithms for several well-studied problems, including d-Hitting Set, Feedback Vertex Set, Node Unique Label Cover, and Weighted d-SAT.
We also show how to derandomize our algorithms at the cost of a subexponential multiplicative factor in the running time. Our derandomization is based on an efficient construction of a new pseudo-random object that might be of independent interest. Finally, we extend our methods to establish new combinatorial upper bounds and develop enumeration algorithms.
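A minimal sketch of the monotone local search scheme, using 2-Hitting Set (vertex cover) with a 2^k branching extension solver, so c = 2 and the theorem would give O(1.5^n). This toy version also includes the empty guess among the samples so it is deterministically correct on the demo instance; the paper's randomized analysis does not need that crutch.

```python
import random

def extend(sets, X, k):
    """Extension problem for 2-Hitting Set: can at most k elements be added
    to X so that every set is hit?  Plain 2^k branching on an unhit set."""
    unhit = next((S for S in sets if not (S & X)), None)
    if unhit is None:
        return X                      # X already hits everything
    if k == 0:
        return None
    for e in unhit:                   # branch on which element of the unhit set to take
        sol = extend(sets, X | {e}, k - 1)
        if sol is not None:
            return sol
    return None

def monotone_local_search(sets, universe, trials=200, seed=1):
    """Guess a random partial solution X, then extend it with few extra elements."""
    rng = random.Random(seed)
    best = set(universe)              # the whole universe trivially hits all sets
    for k in range(len(universe) + 1):
        guesses = [set()] + [
            {e for e in universe if rng.random() < 0.5} for _ in range(trials)
        ]
        for X in guesses:
            sol = extend(sets, X, k)
            if sol is not None and len(sol) < len(best):
                best = sol
    return best

sets = [frozenset(s) for s in [{1, 2}, {2, 3}, {3, 4}, {4, 1}]]  # edges of a 4-cycle
cover = monotone_local_search(sets, {1, 2, 3, 4})
```

On the 4-cycle the minimum vertex cover has size 2, and the search finds one; the interest of the theorem is that balancing the guess size against the extension budget k beats running the branching alone from X = ∅.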
Citations: 46
Cell-probe lower bounds for dynamic problems via a new communication model
Pub Date : 2015-12-04 DOI: 10.1145/2897518.2897556
Huacheng Yu
In this paper, we develop a new communication model to prove a data structure lower bound for the dynamic interval union problem. The problem is to maintain a multiset of intervals I over [0, n] with integer coordinates, supporting the following operations: 1) insert(a, b), add an interval [a, b] to I, provided that a and b are integers in [0, n]; 2) delete(a, b), delete an (existing) interval [a, b] from I; 3) query(), return the total length of the union of all intervals in I. It is related to the two-dimensional case of Klee’s measure problem. We prove that there is a distribution over sequences of operations with O(n) insertions and deletions, and O(n^{0.01}) queries, for which any data structure with any constant error probability requires Ω(n log n) time in expectation. Interestingly, we use the sparse set disjointness protocol of Håstad and Wigderson to speed up a reduction from a new kind of nondeterministic communication games, for which we prove lower bounds. For applications, we prove lower bounds for several dynamic graph problems by reducing them from dynamic interval union.
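The three operations can be pinned down with a naive baseline that stores the multiset and recomputes the union length per query by sorting and merging. This is only the easy upper-bound side; the paper's contribution is the lower bound showing that, amortized, logarithmic-type cost per operation cannot be avoided.

```python
class IntervalUnion:
    """Naive baseline for dynamic interval union: a multiset of intervals
    with insert/delete, and a query that recomputes the union length."""

    def __init__(self):
        self.intervals = []            # multiset, kept as a plain list

    def insert(self, a, b):
        self.intervals.append((a, b))

    def delete(self, a, b):
        self.intervals.remove((a, b))  # removes one existing copy

    def query(self):
        """Sort by left endpoint and sweep, merging overlapping intervals."""
        total, cur_start, cur_end = 0, None, None
        for a, b in sorted(self.intervals):
            if cur_start is None or a > cur_end:
                if cur_start is not None:
                    total += cur_end - cur_start
                cur_start, cur_end = a, b
            else:
                cur_end = max(cur_end, b)
        if cur_start is not None:
            total += cur_end - cur_start
        return total

iu = IntervalUnion()
iu.insert(0, 5)
iu.insert(3, 8)
iu.insert(10, 12)
length = iu.query()   # union is [0, 8] ∪ [10, 12]
iu.delete(3, 8)
```

Each query here costs O(m log m) in the number of stored intervals; the lower bound says no clever data structure can push the total cost of the hard operation sequence below Ω(n log n).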
Citations: 14
Super-linear gate and super-quadratic wire lower bounds for depth-two and depth-three threshold circuits
Pub Date : 2015-11-24 DOI: 10.1145/2897518.2897636
D. Kane, Ryan Williams
In order to formally understand the power of neural computing, we first need to crack the frontier of threshold circuits with two and three layers, a regime that has been surprisingly intractable to analyze. We prove the first super-linear gate lower bounds and the first super-quadratic wire lower bounds for depth-two linear threshold circuits with arbitrary weights, and depth-three majority circuits computing an explicit function. (1) We prove that for all ε ≫ log(n)/√n, the linear-time computable Andreev’s function cannot be computed on a (1/2+ε)-fraction of n-bit inputs by depth-two circuits of o(ε^3 n^{3/2}/log^3 n) gates, nor can it be computed with o(ε^3 n^{5/2}/log^{7/2} n) wires. This establishes an average-case “size hierarchy” for threshold circuits, as Andreev’s function is computable by uniform depth-two circuits of o(n^3) linear threshold gates, and by uniform depth-three circuits of O(n) majority gates. (2) We present a new function in P based on small-biased sets, which we prove cannot be computed by a majority vote of depth-two threshold circuits of o(n^{3/2}/log^3 n) gates, nor with o(n^{5/2}/log^{7/2} n) wires. (3) We give tight average-case (gate and wire) complexity results for computing PARITY with depth-two threshold circuits; the answer turns out to be the same as for depth-two majority circuits. The key is a new method for analyzing random restrictions to linear threshold functions. Our main analytical tool is the Littlewood-Offord Lemma from additive combinatorics.
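The O(n)-gate depth-two computation of PARITY referenced in (3) can be sketched with the classic exact-threshold trick: bottom gates test whether the input weight reaches each k, and a top gate with alternating weights +1, −1 fires exactly when the weight is odd. This is a standard construction assumed here for illustration, not the paper's lower-bound machinery.

```python
from itertools import product

def threshold_gate(weights, theta, inputs):
    """Linear threshold gate: outputs 1 iff the weighted sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def parity_depth_two(x):
    """Depth-two threshold circuit for PARITY with n + 1 gates:
    bottom gates T_k(x) = [sum(x) >= k] for k = 1..n,
    top gate with weights (-1)^(k+1) and threshold 1.
    If sum(x) = s, the top gate sees the alternating sum 1-1+1-...
    over k = 1..s, which is 1 iff s is odd."""
    n = len(x)
    bottom = [threshold_gate([1] * n, k, x) for k in range(1, n + 1)]
    top_weights = [(-1) ** (k + 1) for k in range(1, n + 1)]
    return threshold_gate(top_weights, 1, bottom)

# Check against the parity of all 4-bit inputs.
ok = all(parity_depth_two(list(x)) == sum(x) % 2 for x in product([0, 1], repeat=4))
```

Result (3) says this linear gate count is essentially optimal even on average, matching the bound for depth-two majority circuits.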
Citations: 57
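Result (3) above concerns PARITY on depth-two threshold circuits. As a hedged illustration — the classic folklore upper bound with n+1 gates, not the paper's lower-bound machinery — the following Python sketch builds a depth-two linear threshold circuit for PARITY: bottom gates T_k = [Σ x_i ≥ k] for k = 1..n, and a top threshold gate over their alternating sum, which telescopes to 1 exactly when the number of ones is odd.

```python
from itertools import product

def threshold_gate(weights, theta, x):
    """Linear threshold gate: outputs 1 iff sum_i w_i * x_i >= theta."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def parity_depth2(x):
    """Depth-two linear threshold circuit for PARITY on n bits.

    Bottom layer: n gates T_k = [sum_i x_i >= k], k = 1..n.
    Top layer: one gate checking that the alternating sum
    T_1 - T_2 + T_3 - ... is at least 1 (it is 1 iff |x| is odd).
    """
    n = len(x)
    bottom = [threshold_gate([1] * n, k, x) for k in range(1, n + 1)]
    # weights +1, -1, +1, ... over the bottom gates
    return threshold_gate([(-1) ** k for k in range(n)], 1, bottom)

# sanity check against the definition of PARITY on all 4-bit inputs
for x in product([0, 1], repeat=4):
    assert parity_depth2(x) == sum(x) % 2
```

The construction uses n+1 gates and O(n^2) wires, which is the regime the paper's average-case tightness results address.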
Lift-and-round to improve weighted completion time on unrelated machines
Pub Date : 2015-11-24 DOI: 10.1145/2897518.2897572
N. Bansal, O. Svensson, A. Srinivasan
We consider the problem of scheduling jobs on unrelated machines so as to minimize the sum of weighted completion times. Our main result is a (3/2-c)-approximation algorithm for some fixed c>0, improving upon the long-standing bound of 3/2. To do this, we first introduce a new lift-and-project based SDP relaxation for the problem. This is necessary as the previous convex programming relaxations have an integrality gap of 3/2. Second, we give a new general bipartite-rounding procedure that produces an assignment with certain strong negative correlation properties.
Citations: 40
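The objective in the abstract is Σ_j w_j C_j on unrelated machines, where job j's processing time depends on which machine it runs on. A minimal Python sketch (an illustrative helper, not the paper's SDP rounding) that evaluates this objective for a fixed assignment, sequencing each machine's jobs by Smith's rule (non-increasing w_j/p_{ij}), which is optimal once the assignment is fixed:

```python
def weighted_completion_time(assignment, p, w):
    """Evaluate sum_j w[j] * C_j for a fixed job -> machine assignment.

    p[i][j] is job j's processing time on machine i (unrelated machines);
    on each machine, jobs run in Smith's-rule order (non-increasing w/p),
    which minimizes the objective once the assignment is fixed.
    """
    total = 0.0
    on_machine = {}
    for j, i in enumerate(assignment):
        on_machine.setdefault(i, []).append(j)
    for i, jobs in on_machine.items():
        jobs.sort(key=lambda j: w[j] / p[i][j], reverse=True)
        t = 0.0
        for j in jobs:
            t += p[i][j]      # C_j: completion time of job j on machine i
            total += w[j] * t
    return total
```

The hard part, which the paper's lift-and-project SDP and negative-correlation rounding address, is choosing the assignment itself; evaluating a given assignment, as above, is straightforward.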
Simulating branching programs with edit distance and friends: or: a polylog shaved is a lower bound made
Pub Date : 2015-11-18 DOI: 10.1145/2897518.2897653
Amir Abboud, Thomas Dueholm Hansen, V. V. Williams, Ryan Williams
A recent, active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC’15) proves that computing the Edit Distance of two sequences of length n in truly subquadratic O(n^{2−ε}) time, for some ε>0, is impossible under SETH. The result was extended by follow-up works to simpler looking problems like finding the Longest Common Subsequence (LCS). SETH is a very strong assumption, asserting that even linear size CNF formulas cannot be analyzed for satisfiability with an exponential speedup over exhaustive search. We consider much safer assumptions, e.g. that such a speedup is impossible for SAT on more expressive representations, like subexponential-size NC circuits. Intuitively, this assumption is much more plausible: NC circuits can implement linear algebra and complex cryptographic primitives, while CNFs cannot even approximately compute an XOR of bits. Our main result is a surprising reduction from SAT on Branching Programs to fundamental problems in P like Edit Distance, LCS, and many others. Truly subquadratic algorithms for these problems therefore have far more remarkable consequences than merely faster CNF-SAT algorithms. For example, SAT on arbitrary o(n)-depth bounded fan-in circuits (and therefore also NC-Circuit-SAT) can be solved in (2−ε)^n time. An interesting feature of our work is that we get major consequences even from mildly subquadratic algorithms for Edit Distance or LCS. For example, we show that if an arbitrarily large polylog factor is shaved from n^2 for Edit Distance then NEXP does not have non-uniform NC^1 circuits.
Citations: 111
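The Backurs–Indyk hardness cited above targets the textbook quadratic dynamic program for Edit Distance, sketched below in Python; under SETH it cannot be improved to truly subquadratic time, and this paper derives consequences even from mildly subquadratic (polylog-factor) improvements.

```python
def edit_distance(a, b):
    """Classic O(mn)-time, O(n)-space dynamic program for edit distance:
    minimum number of insertions, deletions, and substitutions turning a into b."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))          # distances from a[:0] to each prefix of b
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                       # delete a[i-1]
                         cur[j - 1] + 1,                    # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute or match
        prev = cur
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
```

For two length-n sequences this runs in Θ(n^2) time, the bound that the SETH-based lower bounds show is essentially optimal.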