We study a central problem in Algorithmic Mechanism Design: constructing truthful mechanisms for welfare maximization in combinatorial auctions with submodular bidders. Dobzinski, Nisan, and Schapira provided the first mechanism that guarantees a non-trivial approximation ratio of O(log^2 m) [STOC'06], where m is the number of items. This was subsequently improved to O(log m log log m) [Dobzinski, APPROX'07] and then to O(log m) [Krysta and Vöcking, ICALP'12]. In this paper we develop the first mechanism that breaks the logarithmic barrier. Specifically, the mechanism provides an approximation ratio of O(√(log m)). Similarly to previous constructions, our mechanism uses polynomially many value and demand queries, and in fact provides the same approximation ratio for the larger class of XOS (a.k.a. fractionally subadditive) valuations. We also develop a computationally efficient implementation of the mechanism for combinatorial auctions with budget additive bidders. Although computing a demand query is NP-hard for budget additive valuations in general, we observe that the specific form of demand queries that our mechanism uses can be computed efficiently when bidders are budget additive.
Shahar Dobzinski. Breaking the logarithmic barrier for truthful combinatorial auctions with submodular bidders. STOC 2016. doi:10.1145/2897518.2897569.
We show that the bipartite perfect matching problem is in quasi-NC^2. That is, it has uniform circuits of quasi-polynomial size n^{O(log n)} and O(log^2 n) depth. Previously, only an exponential upper bound was known on the size of such circuits with poly-logarithmic depth. We obtain our result by an almost complete derandomization of the famous Isolation Lemma when applied to yield an efficient randomized parallel algorithm for the bipartite perfect matching problem.
{"title":"Bipartite perfect matching is in quasi-NC","authors":"Stephen A. Fenner, R. Gurjar, T. Thierauf","doi":"10.1145/2897518.2897564","DOIUrl":"https://doi.org/10.1145/2897518.2897564","url":null,"abstract":"We show that the bipartite perfect matching problem is in quasi- NC2. That is, it has uniform circuits of quasi-polynomial size nO(logn), and O(log2 n) depth. Previously, only an exponential upper bound was known on the size of such circuits with poly-logarithmic depth. We obtain our result by an almost complete derandomization of the famous Isolation Lemma when applied to yield an efficient randomized parallel algorithm for the bipartite perfect matching problem.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"110 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131746955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We show that the Graph Isomorphism (GI) problem and the more general problems of String Isomorphism (SI) and Coset Intersection (CI) can be solved in quasipolynomial (exp((log n)^{O(1)})) time. The best previous bound for GI was exp(O(√(n log n))), where n is the number of vertices (Luks, 1983); for the other two problems, the bound was similar, exp(Õ(√n)), where n is the size of the permutation domain (Babai, 1983). Following the approach of Luks’s seminal 1980/82 paper, the problem we actually address is SI. This problem takes two strings of length n and a permutation group G of degree n (the “ambient group”) as input (G is given by a list of generators) and asks whether or not one of the strings can be transformed into the other by some element of G. Luks’s divide-and-conquer algorithm for SI proceeds by recursion on the ambient group. We build on Luks’s framework and attack the obstructions to efficient Luks recurrence via an interplay between local and global symmetry. We construct group-theoretic “local certificates” to certify the presence or absence of local symmetry, aggregate the negative certificates to canonical k-ary relations where k = O(log n), and employ combinatorial canonical partitioning techniques to split the k-ary relational structure for efficient divide-and-conquer. We show that in a well-defined sense, Johnson graphs are the only obstructions to effective canonical partitioning. The central element of the algorithm is the “local certificates” routine, which is based on a new group-theoretic result, the “Unaffected stabilizers lemma,” that allows us to construct global automorphisms out of local information.
{"title":"Graph isomorphism in quasipolynomial time [extended abstract]","authors":"L. Babai","doi":"10.1145/2897518.2897542","DOIUrl":"https://doi.org/10.1145/2897518.2897542","url":null,"abstract":"We show that the Graph Isomorphism (GI) problem and the more general problems of String Isomorphism (SI) andCoset Intersection (CI) can be solved in quasipolynomial(exp((logn)O(1))) time. The best previous bound for GI was exp(O( √n log n)), where n is the number of vertices (Luks, 1983); for the other two problems, the bound was similar, exp(O~(√ n)), where n is the size of the permutation domain (Babai, 1983). Following the approach of Luks’s seminal 1980/82 paper, the problem we actually address is SI. This problem takes two strings of length n and a permutation group G of degree n (the “ambient group”) as input (G is given by a list of generators) and asks whether or not one of the strings can be transformed into the other by some element of G. Luks’s divide-and-conquer algorithm for SI proceeds by recursion on the ambient group. We build on Luks’s framework and attack the obstructions to efficient Luks recurrence via an interplay between local and global symmetry. We construct group theoretic “local certificates” to certify the presence or absence of local symmetry, aggregate the negative certificates to canonical k-ary relations where k = O(log n), and employ combinatorial canonical partitioning techniques to split the k-ary relational structure for efficient divide-and- conquer. We show that in a well–defined sense, Johnson graphs are the only obstructions to effective canonical partitioning. The central element of the algorithm is the “local certificates” routine which is based on a new group theoretic result, the “Unaffected stabilizers lemma,” that allows us to construct global automorphisms out of local information.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126996801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the sum-of-squares method. We develop new algorithms inspired by analyses of the sum-of-squares method. Our algorithms achieve the same or similar guarantees as sum-of-squares for these problems but the running time is significantly faster. For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of ℝ^n of dimension up to Ω(√n). These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors. For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over ℝ^n of rank up to Ω(n^{4/3}). The best previous algorithm for this problem, due to Ge and Ma (RANDOM 2015), works up to rank Ω(n^{3/2}) but requires quasipolynomial time.
{"title":"Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors","authors":"Samuel B. Hopkins, T. Schramm, Jonathan Shi, David Steurer","doi":"10.1145/2897518.2897529","DOIUrl":"https://doi.org/10.1145/2897518.2897529","url":null,"abstract":"We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the sum-of-squares method. We develop new algorithms inspired by analyses of the sum-of-squares method. Our algorithms achieve the same or similar guarantees as sum-of-squares for these problems but the running time is significantly faster. For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of ℝn of dimension up to Ω(√n). These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors. For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over ℝn of rank up to Ω(n4/3). The best previous algorithm for this problem due to Ge and Ma (RANDOM 2015) works up to rank Ω(n3/2) but requires quasipolynomial time.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115126972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce the sparsified Cholesky and sparsified multigrid algorithms for solving systems of linear equations. These algorithms accelerate Gaussian elimination by sparsifying the nonzero matrix entries created by the elimination process. We use these new algorithms to derive the first nearly linear time algorithms for solving systems of equations in connection Laplacians---a generalization of Laplacian matrices that arise in many problems in image and signal processing. We also prove that every connection Laplacian has a linear-sized approximate inverse. This is an LU factorization with a linear number of nonzero entries that is a strong approximation of the original matrix. Using such a factorization, one can solve systems of equations in a connection Laplacian in linear time. Such a factorization was unknown even for ordinary graph Laplacians.
{"title":"Sparsified Cholesky and multigrid solvers for connection laplacians","authors":"Rasmus Kyng, Y. Lee, Richard Peng, Sushant Sachdeva, D. Spielman","doi":"10.1145/2897518.2897640","DOIUrl":"https://doi.org/10.1145/2897518.2897640","url":null,"abstract":"We introduce the sparsified Cholesky and sparsified multigrid algorithms for solving systems of linear equations. These algorithms accelerate Gaussian elimination by sparsifying the nonzero matrix entries created by the elimination process. We use these new algorithms to derive the first nearly linear time algorithms for solving systems of equations in connection Laplacians---a generalization of Laplacian matrices that arise in many problems in image and signal processing. We also prove that every connection Laplacian has a linear sized approximate inverse. This is an LU factorization with a linear number of nonzero entries that is a strong approximation of the original matrix. Using such a factorization one can solve systems of equations in a connection Laplacian in linear time. Such a factorization was unknown even for ordinary graph Laplacians.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130003526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We give a new general approach for designing exact exponential-time algorithms for subset problems. In a subset problem the input implicitly describes a family of sets over a universe of size n and the task is to determine whether the family contains at least one set. A typical example of a subset problem is Weighted d-SAT. Here, the input is a CNF-formula with clauses of size at most d, and an integer W. The universe is the set of variables and the variables have integer weights. The family contains all the subsets S of variables such that the total weight of the variables in S does not exceed W, and setting the variables in S to 1 and the remaining variables to 0 satisfies the formula. Our approach is based on “monotone local search”, where the goal is to extend a partial solution to a solution by adding as few elements as possible. More formally, in the extension problem we are also given as input a subset X of the universe and an integer k. The task is to determine whether one can add at most k elements to X to obtain a set in the (implicitly defined) family. Our main result is that a c^k n^{O(1)} time algorithm for the extension problem immediately yields a randomized algorithm for finding a solution of any size with running time O((2−1/c)^n). In many cases, the extension problem can be reduced to simply finding a solution of size at most k. Furthermore, efficient algorithms for finding small solutions have been extensively studied in the field of parameterized algorithms. Directly applying these algorithms, our theorem yields in one stroke significant improvements over the best known exponential-time algorithms for several well-studied problems, including d-Hitting Set, Feedback Vertex Set, Node Unique Label Cover, and Weighted d-SAT. Our results demonstrate an interesting and very concrete connection between parameterized algorithms and exact exponential-time algorithms. We also show how to derandomize our algorithms at the cost of a subexponential multiplicative factor in the running time. Our derandomization is based on an efficient construction of a new pseudo-random object that might be of independent interest. Finally, we extend our methods to establish new combinatorial upper bounds and develop enumeration algorithms.
{"title":"Exact algorithms via monotone local search","authors":"F. Fomin, Serge Gaspers, D. Lokshtanov, Saket Saurabh","doi":"10.1145/2897518.2897551","DOIUrl":"https://doi.org/10.1145/2897518.2897551","url":null,"abstract":"We give a new general approach for designing exact exponential-time algorithms for subset problems. In a subset problem the input implicitly describes a family of sets over a universe of size n and the task is to determine whether the family contains at least one set. A typical example of a subset problem is Weighted d-SAT. Here, the input is a CNF-formula with clauses of size at most d, and an integer W. The universe is the set of variables and the variables have integer weights. The family contains all the subsets S of variables such that the total weight of the variables in S does not exceed W, and setting the variables in S to 1 and the remaining variables to 0 satisfies the formula. Our approach is based on “monotone local search”, where the goal is to extend a partial solution to a solution by adding as few elements as possible. More formally, in the extension problem we are also given as input a subset X of the universe and an integer k. The task is to determine whether one can add at most k elements to X to obtain a set in the (implicitly defined) family. Our main result is that a cknO(1) time algorithm for the extension problem immediately yields a randomized algorithm for finding a solution of any size with running time O((2−1/c)n). In many cases, the extension problem can be reduced to simply finding a solution of size at most k. Furthermore, efficient algorithms for finding small solutions have been extensively studied in the field of parameterized algorithms. Directly applying these algorithms, our theorem yields in one stroke significant improvements over the best known exponential-time algorithms for several well-studied problems, including d-Hitting Set, Feedback Vertex Set, Node Unique Label Cover, and Weighted d-SAT. Our results demonstrate an interesting and very concrete connection between parameterized algorithms and exact exponential-time algorithms. We also show how to derandomize our algorithms at the cost of a subexponential multiplicative factor in the running time. Our derandomization is based on an efficient construction of a new pseudo-random object that might be of independent interest. Finally, we extend our methods to establish new combinatorial upper bounds and develop enumeration algorithms.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128087914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we develop a new communication model to prove a data structure lower bound for the dynamic interval union problem. The problem is to maintain a multiset of intervals I over [0, n] with integer coordinates, supporting the following operations: 1) insert(a, b), add an interval [a, b] to I, provided that a and b are integers in [0, n]; 2) delete(a, b), delete an (existing) interval [a, b] from I; 3) query(), return the total length of the union of all intervals in I. It is related to the two-dimensional case of Klee’s measure problem. We prove that there is a distribution over sequences of operations with O(n) insertions and deletions, and O(n^{0.01}) queries, for which any data structure with any constant error probability requires Ω(n log n) time in expectation. Interestingly, we use the sparse set disjointness protocol of Håstad and Wigderson to speed up a reduction from a new kind of nondeterministic communication game, for which we prove lower bounds. For applications, we prove lower bounds for several dynamic graph problems by reducing them from dynamic interval union.
{"title":"Cell-probe lower bounds for dynamic problems via a new communication model","authors":"Huacheng Yu","doi":"10.1145/2897518.2897556","DOIUrl":"https://doi.org/10.1145/2897518.2897556","url":null,"abstract":"In this paper, we develop a new communication model to prove a data structure lower bound for the dynamic interval union problem. The problem is to maintain a multiset of intervals I over [0, n] with integer coordinates, supporting the following operations: 1) insert(a, b), add an interval [a, b] to I, provided that a and b are integers in [0, n]; 2) delete(a, b), delete an (existing) interval [a, b] from I; 3) query(), return the total length of the union of all intervals in I. It is related to the two-dimensional case of Klee’s measure problem. We prove that there is a distribution over sequences of operations with O(n) insertions and deletions, and O(n0.01) queries, for which any data structure with any constant error probability requires Ω(nlogn) time in expectation. Interestingly, we use the sparse set disjointness protocol of Håstad and Wigderson to speed up a reduction from a new kind of nondeterministic communication games, for which we prove lower bounds. For applications, we prove lower bounds for several dynamic graph problems by reducing them from dynamic interval union.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123904453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to formally understand the power of neural computing, we first need to crack the frontier of threshold circuits with two and three layers, a regime that has been surprisingly intractable to analyze. We prove the first super-linear gate lower bounds and the first super-quadratic wire lower bounds for depth-two linear threshold circuits with arbitrary weights, and depth-three majority circuits computing an explicit function. (1) We prove that for all ε ≫ √(log(n)/n), the linear-time computable Andreev’s function cannot be computed on a (1/2+ε)-fraction of n-bit inputs by depth-two circuits of o(ε^3 n^{3/2}/log^3 n) gates, nor can it be computed with o(ε^3 n^{5/2}/log^{7/2} n) wires. This establishes an average-case “size hierarchy” for threshold circuits, as Andreev’s function is computable by uniform depth-two circuits of o(n^3) linear threshold gates, and by uniform depth-three circuits of O(n) majority gates. (2) We present a new function in P based on small-biased sets, which we prove cannot be computed by a majority vote of depth-two threshold circuits of o(n^{3/2}/log^3 n) gates, nor with o(n^{5/2}/log^{7/2} n) wires. (3) We give tight average-case (gate and wire) complexity results for computing PARITY with depth-two threshold circuits; the answer turns out to be the same as for depth-two majority circuits. The key is a new method for analyzing random restrictions to linear threshold functions. Our main analytical tool is the Littlewood-Offord Lemma from additive combinatorics.
{"title":"Super-linear gate and super-quadratic wire lower bounds for depth-two and depth-three threshold circuits","authors":"D. Kane, Ryan Williams","doi":"10.1145/2897518.2897636","DOIUrl":"https://doi.org/10.1145/2897518.2897636","url":null,"abstract":"In order to formally understand the power of neural computing, we first need to crack the frontier of threshold circuits with two and three layers, a regime that has been surprisingly intractable to analyze. We prove the first super-linear gate lower bounds and the first super-quadratic wire lower bounds for depth-two linear threshold circuits with arbitrary weights, and depth-three majority circuits computing an explicit function. (1) We prove that for all ε ≪ √log(n)/n, the linear-time computable Andreev’s function cannot be computed on a (1/2+ε)-fraction of n-bit inputs by depth-two circuits of o(ε3 n3/2/log3 n) gates, nor can it be computed with o(ε3 n5/2/log7/2 n) wires. This establishes an average-case “size hierarchy” for threshold circuits, as Andreev’s function is computable by uniform depth-two circuits of o(n3) linear threshold gates, and by uniform depth-three circuits of O(n) majority gates. (2) We present a new function in P based on small-biased sets, which we prove cannot be computed by a majority vote of depth-two threshold circuits of o(n3/2/log3 n) gates, nor with o(n5/2/log7/2n) wires. (3) We give tight average-case (gate and wire) complexity results for computing PARITY with depth-two threshold circuits; the answer turns out to be the same as for depth-two majority circuits. The key is a new method for analyzing random restrictions to linear threshold functions. Our main analytical tool is the Littlewood-Offord Lemma from additive combinatorics.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131228858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the problem of scheduling jobs on unrelated machines so as to minimize the sum of weighted completion times. Our main result is a (3/2-c)-approximation algorithm for some fixed c>0, improving upon the long-standing bound of 3/2. To do this, we first introduce a new lift-and-project based SDP relaxation for the problem. This is necessary as the previous convex programming relaxations have an integrality gap of 3/2. Second, we give a new general bipartite-rounding procedure that produces an assignment with certain strong negative correlation properties.
{"title":"Lift-and-round to improve weighted completion time on unrelated machines","authors":"N. Bansal, O. Svensson, A. Srinivasan","doi":"10.1145/2897518.2897572","DOIUrl":"https://doi.org/10.1145/2897518.2897572","url":null,"abstract":"We consider the problem of scheduling jobs on unrelated machines so as to minimize the sum of weighted completion times. Our main result is a (3/2-c)-approximation algorithm for some fixed c>0, improving upon the long-standing bound of 3/2. To do this, we first introduce a new lift-and-project based SDP relaxation for the problem. This is necessary as the previous convex programming relaxations have an integrality gap of 3/2. Second, we give a new general bipartite-rounding procedure that produces an assignment with certain strong negative correlation properties.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114480695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A recent, active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC’15) proves that computing the Edit Distance of two sequences of length n in truly subquadratic O(n^{2−ε}) time, for some ε>0, is impossible under SETH. The result was extended by follow-up works to simpler-looking problems like finding the Longest Common Subsequence (LCS). SETH is a very strong assumption, asserting that even linear-size CNF formulas cannot be analyzed for satisfiability with an exponential speedup over exhaustive search. We consider much safer assumptions, e.g. that such a speedup is impossible for SAT on more expressive representations, like subexponential-size NC circuits. Intuitively, this assumption is much more plausible: NC circuits can implement linear algebra and complex cryptographic primitives, while CNFs cannot even approximately compute an XOR of bits. Our main result is a surprising reduction from SAT on Branching Programs to fundamental problems in P like Edit Distance, LCS, and many others. Truly subquadratic algorithms for these problems therefore have far more remarkable consequences than merely faster CNF-SAT algorithms. For example, SAT on arbitrary o(n)-depth bounded fan-in circuits (and therefore also NC-Circuit-SAT) can be solved in (2−ε)^n time. An interesting feature of our work is that we get major consequences even from mildly subquadratic algorithms for Edit Distance or LCS. For example, we show that if an arbitrarily large polylog factor is shaved from n^2 for Edit Distance then NEXP does not have non-uniform NC^1 circuits.
{"title":"Simulating branching programs with edit distance and friends: or: a polylog shaved is a lower bound made","authors":"Amir Abboud, Thomas Dueholm Hansen, V. V. Williams, Ryan Williams","doi":"10.1145/2897518.2897653","DOIUrl":"https://doi.org/10.1145/2897518.2897653","url":null,"abstract":"A recent, active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC’15) proves that computing the Edit Distance of two sequences of length n in truly subquadratic O(n2−ε) time, for some ε>0, is impossible under SETH. The result was extended by follow-up works to simpler looking problems like finding the Longest Common Subsequence (LCS). SETH is a very strong assumption, asserting that even linear size CNF formulas cannot be analyzed for satisfiability with an exponential speedup over exhaustive search. We consider much safer assumptions, e.g. that such a speedup is impossible for SAT on more expressive representations, like subexponential-size NC circuits. Intuitively, this assumption is much more plausible: NC circuits can implement linear algebra and complex cryptographic primitives, while CNFs cannot even approximately compute an XOR of bits. Our main result is a surprising reduction from SAT on Branching Programs to fundamental problems in P like Edit Distance, LCS, and many others. Truly subquadratic algorithms for these problems therefore have far more remarkable consequences than merely faster CNF-SAT algorithms. For example, SAT on arbitrary o(n)-depth bounded fan-in circuits (and therefore also NC-Circuit-SAT) can be solved in (2−ε)n time. An interesting feature of our work is that we get major consequences even from mildly subquadratic algorithms for Edit Distance or LCS. For example, we show that if an arbitrarily large polylog factor is shaved from n2 for Edit Distance then NEXP does not have non-uniform NC1 circuits.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125069902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}