A (5/3 + ε)-approximation for unsplittable flow on a path: placing small tasks into boxes
F. Grandoni, Tobias Mömke, Andreas Wiese, Hang Zhou
In the unsplittable flow on a path problem (UFP) we are given a path with edge capacities and a collection of tasks. Each task is characterized by a subpath, a profit, and a demand. Our goal is to compute a maximum-profit subset of tasks such that, for each edge e, the total demand of the selected tasks that use e does not exceed the capacity of e. The current best polynomial-time approximation factor for this problem is 2+ε for any constant ε>0 [Anagnostopoulos et al., SODA 2014]. This is the best known factor even in the case of uniform edge capacities [Călinescu et al., IPCO 2002, TALG 2011]. These results, like most prior work, are based on a partition of tasks into large and small depending on their ratio of demand to capacity over their respective edges: these algorithms invoke (1+ε)-approximations for large and small tasks separately. The known techniques do not seem able to combine a large fraction of the large and small tasks together (apart from some special cases and quasi-polynomial-time algorithms). The main contribution of this paper is to overcome this critical barrier. Namely, we present a polynomial-time algorithm that obtains roughly all profit from the optimal large tasks plus one third of the profit from the optimal small tasks. In combination with known results, this implies a polynomial-time (5/3+ε)-approximation algorithm for UFP. Our algorithm is based on two main ingredients. First, we prove that there exist certain sub-optimal solutions where, roughly speaking, small tasks are packed into boxes. To prove that such solutions can yield high profit we introduce a horizontal slicing lemma which yields a novel geometric interpretation of certain solutions. The resulting boxed structure has polynomial complexity, hence cannot be guessed directly. Therefore, our second contribution is a dynamic program that guesses this structure (plus a packing of large and small tasks) on the fly, while losing at most one third of the profit of the remaining small tasks.
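To make the objective concrete, here is a minimal sketch of the feasibility check and a brute-force baseline for tiny instances (the toy instance and all names are illustrative; this is not the paper's algorithm):

```python
from itertools import combinations

# A task uses the edges of its subpath [start, end), and has a demand and a profit.
# The brute-force solver below only illustrates the UFP objective on a toy instance.

def feasible(tasks, capacity):
    """Check that on every edge the total demand of the selected tasks fits."""
    load = [0] * len(capacity)
    for start, end, demand, _profit in tasks:
        for e in range(start, end):
            load[e] += demand
    return all(load[e] <= capacity[e] for e in range(len(capacity)))

def brute_force_ufp(tasks, capacity):
    """Exhaustively search all subsets (exponential; tiny examples only)."""
    best, best_profit = (), 0
    for r in range(len(tasks) + 1):
        for subset in combinations(tasks, r):
            if feasible(subset, capacity):
                profit = sum(t[3] for t in subset)
                if profit > best_profit:
                    best, best_profit = subset, profit
    return best, best_profit

capacity = [3, 2, 2, 3]                              # capacities of the 4 path edges
tasks = [(0, 3, 2, 5), (1, 4, 2, 4), (2, 3, 1, 2)]   # (start, end, demand, profit)
print(brute_force_ufp(tasks, capacity))              # picks the single task of profit 5
```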
{"title":"A (5/3 + ε)-approximation for unsplittable flow on a path: placing small tasks into boxes","authors":"F. Grandoni, Tobias Mömke, Andreas Wiese, Hang Zhou","doi":"10.1145/3188745.3188894","DOIUrl":"https://doi.org/10.1145/3188745.3188894","url":null,"abstract":"In the unsplittable flow on a path problem (UFP) we are given a path with edge capacities and a collection of tasks. Each task is characterized by a subpath, a profit, and a demand. Our goal is to compute a maximum profit subset of tasks such that, for each edge e, the total demand of selected tasks that use e does not exceed the capacity of e. The current best polynomial-time approximation factor for this problem is 2+є for any constant є>0 [Anagostopoulos et al.-SODA 2014]. This is the best known factor even in the case of uniform edge capacities [Călinescu et al.-IPCO 2002, TALG 2011]. These results, likewise most prior work, are based on a partition of tasks into large and small depending on their ratio of demand to capacity over their respective edges: these algorithms invoke (1+є)-approximations for large and small tasks separately. The known techniques do not seem to be able to combine a big fraction of large and small tasks together (apart from some special cases and quasi-polynomial-time algorithms). The main contribution of this paper is to overcome this critical barrier. Namely, we present a polynomial-time algorithm that obtains roughly all profit from the optimal large tasks plus one third of the profit from the optimal small tasks. In combination with known results, this implies a polynomial-time (5/3+є)-approximation algorithm for UFP. Our algorithm is based on two main ingredients. First, we prove that there exist certain sub-optimal solutions where, roughly speaking, small tasks are packed into boxes. To prove that such solutions can yield high profit we introduce a horizontal slicing lemma which yields a novel geometric interpretation of certain solutions. The resulting boxed structure has polynomial complexity, hence cannot be guessed directly. Therefore, our second contribution is a dynamic program that guesses this structure (plus a packing of large and small tasks) on the fly, while losing at most one third of the profit of the remaining small tasks.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87920927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The art gallery problem is ∃ℝ-complete
Mikkel Abrahamsen, Anna Adamaszek, Tillmann Miltzow
We prove that the art gallery problem is equivalent under polynomial-time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide whether there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ℝ, which has been studied earlier by other communities. The class ∃ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It is easy to see that NP ⊆ ∃ℝ. We prove that the art gallery problem is ∃ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP = ∃ℝ. As an illustration of our techniques, we can show that for every compact semi-algebraic set S ⊂ [0,1]^2 there exists a polygon with rational coordinates that enforces one of the guards to be at any position p ∈ S, in any optimal guarding. As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. This rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates that are roots of polynomials of arbitrary degree. In the ∃ℝ-hardness proof for the art gallery problem we introduce a new ∃ℝ-complete problem, ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ℝ-hardness proofs for other problems. In particular, ETR-INV has very recently been used to prove ∃ℝ-hardness of other geometric problems.
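For intuition about the visibility predicate, a minimal sketch that approximates "g sees p" by sampling points along the segment gp and testing them against the polygon with ray casting (the L-shaped polygon and the sampling approximation are illustrative; boundary-grazing segments are not handled reliably):

```python
def point_in_polygon(q, poly):
    """Standard ray-casting test for a point inside a simple polygon (boundary cases ignored)."""
    x, y = q
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sees(g, p, poly, samples=200):
    """Approximate 'segment gp is contained in P' by sampling interior points of the segment."""
    return all(
        point_in_polygon((g[0] + (p[0] - g[0]) * t, g[1] + (p[1] - g[1]) * t), poly)
        for t in (i / samples for i in range(1, samples))
    )

# Illustrative L-shaped polygon: the reflex corner at (2, 2) blocks some lines of sight.
P = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(sees((1, 1), (3, 1), P))     # expected True: the segment stays in the bottom rectangle
print(sees((3, 1), (1, 3.5), P))   # expected False: the segment leaves the polygon past (2, 2)
```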
{"title":"The art gallery problem is ∃ ℝ-complete","authors":"Mikkel Abrahamsen, Anna Adamaszek, Tillmann Miltzow","doi":"10.1145/3188745.3188868","DOIUrl":"https://doi.org/10.1145/3188745.3188868","url":null,"abstract":"We prove that the art gallery problem is equivalent under polynomial time reductions to deciding whether a system of polynomial equations over the real numbers has a solution. The art gallery problem is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. The art gallery problem has stimulated extensive research in geometry and in algorithms. However, the complexity status of the art gallery problem has not been resolved. It has long been known that the problem is NP-hard, but no one has been able to show that it lies in NP. Recently, the computational geometry community became more aware of the complexity class ∃ ℝ, which has been studied earlier by other communities. The class ∃ ℝ consists of problems that can be reduced in polynomial time to the problem of deciding whether a system of polynomial equations with integer coefficients and any number of real variables has a solution. It can be easily seen that NP ⊆ ∃ ℝ. We prove that the art gallery problem is ∃ ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the art gallery problem, and (2) the art gallery problem is not in the complexity class NP unless NP=∃ ℝ. As an illustration of our techniques, we can show that for every compact semi-algebraic set S⊂ [0,1]2 there exists a polygon with rational coordinates that enforces one of the guards to be at any position p∈ S, in any optimal guarding. As a corollary of our construction, we prove that for any real algebraic number α there is an instance of the art gallery problem where one of the coordinates of the guards equals α in any guard set of minimum cardinality. That rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates being roots of polynomials with arbitrary degree. In the ∃ ℝ-hardness proof for the art gallery problem we introduce a new ∃ ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it can be used to obtain ∃ ℝ-hardness proofs for other problems. In particular, ETR-INV has been very recently used to prove ∃ ℝ-hardness of other geometric problems.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90373654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation beats richness: new data-structure lower bounds
A. Chattopadhyay, M. Koucký, B. Loff, Sagnik Mukhopadhyay
We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC’94) and Miltersen, Nisan, Safra and Wigderson (STOC’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F_2 and Bob gets a vector y ∈ F_2^n. Alice and Bob need to evaluate f(x·y) for a Boolean function f: {0,1}^p → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F_2. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structure schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower bound for it which is provably better than any lower bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds.
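A minimal sketch of the composed problem from the simulation theorem, on a toy instance (the matrix, vector, and the choice of outer function f are illustrative):

```python
# Toy illustration of the asymmetric problem f(x . y) over F_2:
# Alice holds a p x n matrix x, Bob holds a length-n vector y.
p, n = 3, 5
x = [[1, 0, 1, 1, 0],
     [0, 1, 1, 0, 1],
     [1, 1, 0, 0, 1]]          # Alice's input (p x n, entries in {0,1})
y = [1, 1, 0, 1, 0]            # Bob's input (length n)

# The F_2 matrix-vector product x . y is a p-bit string.
xy = [sum(x[i][j] & y[j] for j in range(n)) % 2 for i in range(p)]

# An illustrative outer function f: {0,1}^p -> {0,1}, here the AND of all bits;
# the theorem relates the communication cost of f(x . y) to the parity
# decision tree cost of f.
f = lambda z: int(all(z))
print(xy, f(xy))               # -> [0, 1, 0] 0
```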
{"title":"Simulation beats richness: new data-structure lower bounds","authors":"A. Chattopadhyay, M. Koucký, B. Loff, Sagnik Mukhopadhyay","doi":"10.1145/3188745.3188874","DOIUrl":"https://doi.org/10.1145/3188745.3188874","url":null,"abstract":"We develop a new technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC’94) and Miltersen, Nisan, Safra and Wigderson (STOC’95). At the core of our technique is the first simulation theorem in the asymmetric setting, where Alice gets a p × n matrix x over F2 and Bob gets a vector y ∈ F2n. Alice and Bob need to evaluate f(x· y) for a Boolean function f: {0,1}p → {0,1}. Our simulation theorems show that a deterministic/randomized communication protocol exists for this problem, with cost C· n for Alice and C for Bob, if and only if there exists a deterministic/randomized *parity decision tree* of cost Θ(C) for evaluating f. As applications of this technique, we obtain the following results: 1. The first strong lower-bounds against randomized data-structure schemes for the Vector-Matrix-Vector product problem over F2. Moreover, our method yields strong lower bounds even when the data-structure scheme has tiny advantage over random guessing. 2. The first lower bounds against randomized data-structures schemes for two natural Boolean variants of Orthogonal Vector Counting. 3. We construct an asymmetric communication problem and obtain a deterministic lower-bound for it which is provably better than any lower-bound that may be obtained by the classical Richness Method of Miltersen et al. (STOC ’95). This seems to be the first known limitation of the Richness Method in the context of proving deterministic lower bounds.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"183 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80442511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The query complexity of graph isomorphism: bypassing distribution testing lower bounds
Krzysztof Onak, Xiaorui Sun
We study the query complexity of graph isomorphism in the property testing model for dense graphs. We give an algorithm that makes n^{1+o(1)} queries, improving on the previous best bound of Õ(n^{5/4}). Since the problem is known to require Ω(n) queries, our algorithm is optimal up to a subpolynomial factor. While trying to extend a known connection to distribution testing, discovered by Fischer and Matsliah (SICOMP 2008), one encounters a natural obstacle presented by sampling lower bounds such as the Ω(n^{2/3})-sample lower bound for distribution closeness testing (Valiant, SICOMP 2011). In the context of graph isomorphism testing, these bounds lead to an n^{1+Ω(1)} barrier for Fischer and Matsliah’s approach. We circumvent this and other limitations by exploiting a geometric representation of the connectivity of vertices. An approximate representation of similarities between vertices can be learned with a near-linear number of queries and allows relaxed versions of sampling and distribution testing problems to be solved more efficiently.
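As a minimal sketch of the query model in question (dense-graph model with adjacency-matrix probes; the oracle wrapper and the exhaustive baseline are illustrative and unrelated to the paper's algorithm):

```python
from itertools import permutations

class AdjacencyOracle:
    """Adjacency-matrix oracle for the dense-graph model; counts the probes made."""
    def __init__(self, adj):
        self.adj = adj
        self.queries = 0

    def __call__(self, u, v):
        self.queries += 1
        return self.adj[u][v]

def isomorphic_exhaustive(oracle_g, oracle_h, n):
    """Baseline that tries every vertex bijection; it may probe essentially all entries."""
    verts = range(n)
    return any(
        all(oracle_g(u, v) == oracle_h(pi[u], pi[v]) for u in verts for v in verts)
        for pi in permutations(verts)
    )

G = AdjacencyOracle([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path 0-1-2
H = AdjacencyOracle([[0, 0, 1], [0, 0, 1], [1, 1, 0]])  # the same path, relabelled
print(isomorphic_exhaustive(G, H, 3), G.queries + H.queries)
```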
{"title":"The query complexity of graph isomorphism: bypassing distribution testing lower bounds","authors":"Krzysztof Onak, Xiaorui Sun","doi":"10.1145/3188745.3188952","DOIUrl":"https://doi.org/10.1145/3188745.3188952","url":null,"abstract":"We study the query complexity of graph isomorphism in the property testing model for dense graphs. We give an algorithm that makes n1+o(1) queries, improving on the previous best bound of Õ(n5/4). Since the problem is known to require Ω(n) queries, our algorithm is optimal up to a subpolynomial factor. While trying to extend a known connection to distribution testing, discovered by Fischer and Matsliah (SICOMP 2008), one encounters a natural obstacle presented by sampling lower bounds such as the Ω(n2/3)-sample lower bound for distribution closeness testing (Valiant, SICOMP 2011). In the context of graph isomorphism testing, these bounds lead to an n1+Ω(1) barrier for Fischer and Matsliah’s approach. We circumvent this and other limitations by exploiting a geometric representation of the connectivity of vertices. An approximate representation of similarities between vertices can be learned with a near-linear number of queries and allows relaxed versions of sampling and distribution testing problems to be solved more efficiently.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75514335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explicit binary tree codes with polylogarithmic size alphabet
Gil Cohen, Bernhard Haeupler, L. Schulman
This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity, a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting.
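To make the distance requirement for tree codes concrete, a brute-force checker under the standard definition (the toy encoder is illustrative and certainly not the construction of the paper): an online encoder maps every prefix of the message to one output symbol, and distance δ requires that any two equal-length messages first differing at position i produce codewords disagreeing on at least a δ fraction of positions i..n.

```python
from itertools import product

def encode(msg, enc):
    """Online encoding: the t-th output symbol depends only on the prefix msg[:t+1]."""
    return [enc(msg[:t + 1]) for t in range(len(msg))]

def tree_code_distance(enc, alphabet, n):
    """Brute force over all pairs of length-n messages (exponential; toy sizes only)."""
    delta = 1.0
    for x in product(alphabet, repeat=n):
        for y in product(alphabet, repeat=n):
            if x == y:
                continue
            i = next(t for t in range(n) if x[t] != y[t])   # first divergence point
            cx, cy = encode(x, enc), encode(y, enc)
            disagreements = sum(cx[t] != cy[t] for t in range(i, n))
            delta = min(delta, disagreements / (n - i))
    return delta

# Illustrative (weak) encoder: output symbol = (last input symbol, parity of the prefix).
toy_enc = lambda prefix: (prefix[-1], sum(prefix) % 2)
print(tree_code_distance(toy_enc, [0, 1], n=4))
```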
{"title":"Explicit binary tree codes with polylogarithmic size alphabet","authors":"Gil Cohen, Bernhard Haeupler, L. Schulman","doi":"10.1145/3188745.3188928","DOIUrl":"https://doi.org/10.1145/3188745.3188928","url":null,"abstract":"This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size poly(logn), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity - a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76468554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Circuit lower bounds for nondeterministic quasi-polytime: an easy witness lemma for NP and NQP
Cody Murray, Richard Ryan Williams
We prove that if every problem in NP has n^k-size circuits for a fixed constant k, then for every NP-verifier and every yes-instance x of length n for that verifier, the verifier’s search space has an n^{O(k^3)}-size witness circuit: a witness for x that can be encoded with a circuit of only n^{O(k^3)} size. An analogous statement is proved for nondeterministic quasi-polynomial time, i.e., NQP = NTIME[n^{log^{O(1)} n}]. This significantly extends the Easy Witness Lemma of Impagliazzo, Kabanets, and Wigderson [JCSS’02], which only held for larger nondeterministic classes such as NEXP. As a consequence, the connections between circuit-analysis algorithms and circuit lower bounds can be considerably sharpened: algorithms for approximately counting satisfying assignments to given circuits which improve over exhaustive search can imply circuit lower bounds for functions in NQP or even NP. To illustrate, applying known algorithms for satisfiability of ACC ∘ THR circuits [R. Williams, STOC 2014] we conclude that for every fixed k, NQP does not have n^{log^k n}-size ACC ∘ THR circuits.
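A symbolic restatement of the easy witness lemma claimed above, paraphrased directly from the abstract (SIZE(n^k) denotes languages with n^k-size circuits and L(V) the language accepted by verifier V; this is a paraphrase, not a quotation from the paper):

```latex
% Easy witness lemma for NP, as stated in the abstract.
\[
  \mathrm{NP} \subseteq \mathrm{SIZE}(n^{k})
  \;\Longrightarrow\;
  \forall V \;\forall x \in L(V),\ |x| = n:\ 
  \exists\,\text{a circuit of size } n^{O(k^{3})}
  \text{ encoding a witness } w \text{ with } V(x,w)=1 .
\]
```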
{"title":"Circuit lower bounds for nondeterministic quasi-polytime: an easy witness lemma for NP and NQP","authors":"Cody Murray, Richard Ryan Williams","doi":"10.1145/3188745.3188910","DOIUrl":"https://doi.org/10.1145/3188745.3188910","url":null,"abstract":"We prove that if every problem in NP has nk-size circuits for a fixed constant k, then for every NP-verifier and every yes-instance x of length n for that verifier, the verifier’s search space has an nO(k3)-size witness circuit: a witness for x that can be encoded with a circuit of only nO(k3) size. An analogous statement is proved for nondeterministic quasi-polynomial time, i.e., NQP = NTIME[nlogO(1) n]. This significantly extends the Easy Witness Lemma of Impagliazzo, Kabanets, and Wigderson [JCSS’02] which only held for larger nondeterministic classes such as NEXP. As a consequence, the connections between circuit-analysis algorithms and circuit lower bounds can be considerably sharpened: algorithms for approximately counting satisfying assignments to given circuits which improve over exhaustive search can imply circuit lower bounds for functions in NQP or even NP. To illustrate, applying known algorithms for satisfiability of ACC ∘ THR circuits [R. Williams, STOC 2014] we conclude that for every fixed k, NQP does not have nlogk n-size ACC ∘ THR circuits.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82722130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breaking the circuit-size barrier in secret sharing
Tianren Liu, V. Vaikuntanathan
We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F: {0,1}^n → {0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T)=1, and should have no information about s if F(T)=0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2^{n−o(n)}, but the best lower bound is Ω(n^2/log n). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. In fact, several researchers have suggested the existence of a representation-size barrier which implies that the right answer is closer to the upper bound, namely, 2^{n−o(n)}. In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 2^{0.994n} and a linear secret sharing scheme for any access structure with shares of size 2^{0.999n}. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2^{Õ(√n)} for 2^{(n choose n/2)} monotone access structures, out of a total of 2^{(n choose n/2)·(1+O(log n/n))} of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
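For a concrete instance of the definition, a minimal sketch of the classical replication/DNF-style scheme for one small access structure (a textbook construction whose share size grows with the number of minimal authorized sets; it is not the scheme of the paper, and the names are illustrative):

```python
import secrets
from functools import reduce

def deal(secret, minimal_sets, n, bits=32):
    """Within each minimal authorized set T, split the secret additively (XOR)
    into |T| pieces, one piece per party in T."""
    shares = {p: {} for p in range(n)}
    for idx, T in enumerate(minimal_sets):
        pieces = [secrets.randbits(bits) for _ in range(len(T) - 1)]
        pieces.append(reduce(lambda a, b: a ^ b, pieces, secret))  # pieces XOR to the secret
        for party, piece in zip(sorted(T), pieces):
            shares[party][idx] = piece
    return shares

def reconstruct(shares, minimal_sets, coalition):
    """A coalition reconstructs iff it contains some minimal authorized set."""
    for idx, T in enumerate(minimal_sets):
        if T <= coalition:
            return reduce(lambda a, b: a ^ b, (shares[p][idx] for p in T))
    return None  # unauthorized: every set's XOR pieces are incomplete

# Access structure on 4 parties with minimal authorized sets {0,1} and {1,2,3}.
minimal_sets = [{0, 1}, {1, 2, 3}]
s = 0xDEADBEEF
sh = deal(s, minimal_sets, n=4)
print(hex(reconstruct(sh, minimal_sets, {0, 1})))   # authorized -> recovers s
print(reconstruct(sh, minimal_sets, {0, 2, 3}))     # unauthorized -> None
```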
{"title":"Breaking the circuit-size barrier in secret sharing","authors":"Tianren Liu, V. Vaikuntanathan","doi":"10.1145/3188745.3188936","DOIUrl":"https://doi.org/10.1145/3188745.3188936","url":null,"abstract":"We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F:{0,1}n→{0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T)=1, and should have no information about s if F(T)=0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2n−o(n), but the best lower bound is Ω(n2/logn). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. Indeed, several researchers have suggested the existence of a representation size barrier which implies that the right answer is closer to the upper bound, namely, 2n−o(n). In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 20.994n and a linear secret sharing scheme for any access structure with shares of size 20.999n. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2Õ(√n) for 2n n/2 monotone access structures, out of a total of 2n n/2· (1+O(logn/n)) of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89976606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved pseudorandomness for unordered branching programs through local monotonicity
Eshan Chattopadhyay, P. Hatami, Omer Reingold, Avishay Tal
We present an explicit pseudorandom generator with seed length Õ((log n)^{w+1}) for read-once, oblivious, width-w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12), which required seed length n^{1/2+o(1)}. A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs: for any width-w read-once, oblivious branching program B: {0,1}^n → {0,1} and any k ∈ {1,…,n}, [complex formula not displayed]. This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry out critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs.
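To ground the notions of Fourier spectrum and level-k Fourier mass used above, a minimal sketch under one common convention for {0,1}-valued functions (the toy function stands in for a branching program; this is not the paper's bound):

```python
from itertools import product, combinations

def fourier_coefficient(f, S, n):
    """B_hat(S) = E_x[ f(x) * (-1)^{sum_{i in S} x_i} ]  (one common 0/1-valued convention)."""
    total = 0.0
    for x in product([0, 1], repeat=n):
        total += f(x) * (-1) ** sum(x[i] for i in S)
    return total / 2 ** n

def level_k_mass(f, n, k):
    """Sum of |B_hat(S)| over all sets S of size exactly k."""
    return sum(abs(fourier_coefficient(f, S, n)) for S in combinations(range(n), k))

# Toy stand-in function: accept iff the parity of the input bits is 0.
parity_indicator = lambda x: 1 - (sum(x) % 2)
n = 4
print([round(level_k_mass(parity_indicator, n, k), 3) for k in range(n + 1)])
# -> [0.5, 0.0, 0.0, 0.0, 0.5]: all Fourier mass sits at levels 0 and n.
```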
{"title":"Improved pseudorandomness for unordered branching programs through local monotonicity","authors":"Eshan Chattopadhyay, P. Hatami, Omer Reingold, Avishay Tal","doi":"10.1145/3188745.3188800","DOIUrl":"https://doi.org/10.1145/3188745.3188800","url":null,"abstract":"We present an explicit pseudorandom generator with seed length Õ((logn)w+1) for read-once, oblivious, width w branching programs that can read their input bits in any order. This improves upon the work of Impagliazzo, Meka and Zuckerman (FOCS’12) where they required seed length n1/2+o(1). A central ingredient in our work is the following bound that we prove on the Fourier spectrum of branching programs. For any width w read-once, oblivious branching program B:{0,1}n→ {0,1}, any k ∈ {1,…,n}, [complex formula not displayed] This settles a conjecture posed by Reingold, Steinke and Vadhan (RANDOM’13). Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86203928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monotone circuit lower bounds from resolution
A. Garg, Mika Göös, Pritish Kamath, Dmitry Sokolov
For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system.
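To illustrate gadget composition at the level of functions, a toy truth-table sketch (the XOR gadget and the small CNF are illustrative stand-ins; the actual lifting works with the formula and its refutations, not truth tables): each variable z_i of F is replaced by a two-party gadget g(x_i, y_i).

```python
from itertools import product

# A small unsatisfiable CNF F over z1, z2: (z1) AND (NOT z1 OR z2) AND (NOT z2).
def F(z):
    z1, z2 = z
    return z1 and (not z1 or z2) and (not z2)

# A two-bit XOR gadget g: {0,1} x {0,1} -> {0,1} (stand-in for the paper's gadget).
g = lambda a, b: a ^ b

# The composed function F o g^n on Alice-bits x and Bob-bits y.
def composed(x, y):
    return F(tuple(g(a, b) for a, b in zip(x, y)))

# F is unsatisfiable, so the composed function is identically false as well.
print(any(composed(x, y) for x in product([0, 1], repeat=2)
                          for y in product([0, 1], repeat=2)))
```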
{"title":"Monotone circuit lower bounds from resolution","authors":"A. Garg, Mika Göös, Pritish Kamath, Dmitry Sokolov","doi":"10.1145/3188745.3188838","DOIUrl":"https://doi.org/10.1145/3188745.3188838","url":null,"abstract":"For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols—or, equivalently, that a monotone function associated with F has large monotone circuit complexity. Our result extends to monotone real circuits, which yields new lower bounds for the Cutting Planes proof system.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84827207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinear dimension reduction via outer Bi-Lipschitz extensions
S. Mahabadi, K. Makarychev, Yury Makarychev, Ilya P. Razenshteyn
We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X ⊂ ℝ^d of size N and a permutation ("priority ranking") of X, there exists an embedding f of X into ℝ^{O(log N)} with distortion O(log log N) such that the point of rank j has only O(log^{3+ε} j) non-zero coordinates – more specifically, all but the first O(log^{3+ε} j) coordinates are equal to 0; the distortion of f restricted to the first j points (according to the ranking) is at most O(log log j). The result makes progress towards answering an open question by Elkin, Filtser, and Neiman about prioritized dimension reductions. We prove that given a set X of N points in ℝ^d, there exists a *terminal* dimension reduction embedding of ℝ^d into ℝ^{d′}, where d′ = O(log N/ε^4), which preserves distances ||x−y|| between points x ∈ X and y ∈ ℝ^d, up to a multiplicative factor of 1 ± ε. This improves a recent result by Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary.
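For context, a minimal sketch of the classical (linear, non-prioritized) Johnson–Lindenstrauss embedding via a random Gaussian matrix, which the prioritized and terminal variants above refine (illustrative only; the paper's embeddings are different and nonlinear):

```python
import random, math

def jl_embed(points, d_out, seed=0):
    """Project d-dimensional points to d_out dimensions with an i.i.d. Gaussian matrix,
    scaled so squared norms are preserved in expectation."""
    rng = random.Random(seed)
    d_in = len(points[0])
    A = [[rng.gauss(0, 1) / math.sqrt(d_out) for _ in range(d_in)] for _ in range(d_out)]
    return [[sum(A[i][j] * p[j] for j in range(d_in)) for i in range(d_out)] for p in points]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Toy check of pairwise-distance distortion for a few random points.
rng = random.Random(1)
X = [[rng.gauss(0, 1) for _ in range(200)] for _ in range(5)]
Y = jl_embed(X, d_out=64)
ratios = [dist(Y[i], Y[j]) / dist(X[i], X[j]) for i in range(5) for j in range(i + 1, 5)]
print(min(ratios), max(ratios))   # typically close to 1
```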
{"title":"Nonlinear dimension reduction via outer Bi-Lipschitz extensions","authors":"S. Mahabadi, K. Makarychev, Yury Makarychev, Ilya P. Razenshteyn","doi":"10.1145/3188745.3188828","DOIUrl":"https://doi.org/10.1145/3188745.3188828","url":null,"abstract":"We introduce and study the notion of *an outer bi-Lipschitz extension* of a map between Euclidean spaces. The notion is a natural analogue of the notion of *a Lipschitz extension* of a Lipschitz map. We show that for every map f there exists an outer bi-Lipschitz extension f′ whose distortion is greater than that of f by at most a constant factor. This result can be seen as a counterpart of the classic Kirszbraun theorem for outer bi-Lipschitz extensions. We also study outer bi-Lipschitz extensions of near-isometric maps and show upper and lower bounds for them. Then, we present applications of our results to prioritized and terminal dimension reduction problems, described next. We prove a *prioritized* variant of the Johnson–Lindenstrauss lemma: given a set of points X⊂ ℝd of size N and a permutation (”priority ranking”) of X, there exists an embedding f of X into ℝO(logN) with distortion O(loglogN) such that the point of rank j has only O(log3 + ε j) non-zero coordinates – more specifically, all but the first O(log3+ε j) coordinates are equal to 0; the distortion of f restricted to the first j points (according to the ranking) is at most O(loglogj). The result makes a progress towards answering an open question by Elkin, Filtser, and Neiman about prioritized dimension reductions. We prove that given a set X of N points in ℜd, there exists a *terminal* dimension reduction embedding of ℝd into ℝd′, where d′ = O(logN/ε4), which preserves distances ||x−y|| between points x∈ X and y ∈ ℝd, up to a multiplicative factor of 1 ± ε. This improves a recent result by Elkin, Filtser, and Neiman. The dimension reductions that we obtain are nonlinear, and this nonlinearity is necessary.","PeriodicalId":20593,"journal":{"name":"Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87407417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}