The Complexity of Renaming
Dan Alistarh, J. Aspnes, Seth Gilbert, R. Guerraoui
DOI: 10.1109/FOCS.2011.66
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of $\Omega(k)$ process steps for deterministic renaming into any namespace of size sub-exponential in $k$, where $k$ is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of $\Omega(k \log(k/c))$ on the total step complexity of renaming into a namespace of size $ck$, for any $c \geq 1$. This bound applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
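One direction of the link to fetch-and-increment is easy to see, and it is the direction that lets the renaming lower bound transfer. Below is a minimal sketch (our own illustration, not the paper's construction, with a lock-based stand-in for a linearizable fetch-and-increment object): a single fetch-and-increment per process solves renaming into a tight namespace, so deterministic fetch-and-increment implementations inherit the $\Omega(k)$ step bound.

```python
import threading

class FetchAndIncrement:
    """Linearizable fetch-and-increment; a lock-based stand-in (the paper
    concerns implementations from shared-memory primitives, not locks)."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_increment(self) -> int:
        with self._lock:
            v = self._value
            self._value += 1
            return v

def rename(counter: FetchAndIncrement) -> int:
    # One operation per process yields distinct names 0..k-1, a tight
    # namespace, so any lower bound on renaming transfers to
    # deterministic fetch-and-increment implementations.
    return counter.fetch_and_increment()

counter, names = FetchAndIncrement(), []
threads = [threading.Thread(target=lambda: names.append(rename(counter)))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert sorted(names) == list(range(8))  # all names distinct, namespace tight
```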
{"title":"The Complexity of Renaming","authors":"Dan Alistarh, J. Aspnes, Seth Gilbert, R. Guerraoui","doi":"10.1109/FOCS.2011.66","DOIUrl":"https://doi.org/10.1109/FOCS.2011.66","url":null,"abstract":"We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Omega( k ) process steps for deterministic renaming into any namespace of size sub-exponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Omega( k log ( k / c ) ) on the total step complexity of renaming into a namespace of size ck, for any c geq 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125268943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the generalized sorting problem where we are given a set of $n$ elements to be sorted, but only a subset of all possible pairwise element comparisons is allowed. The goal is to determine the sorted order using the smallest possible number of allowed comparisons. The generalized sorting problem may be equivalently viewed as follows. Given an undirected graph $G(V, E)$ where $V$ is the set of elements to be sorted and $E$ defines the set of allowed comparisons, adaptively find the smallest subset $E' \subseteq E$ of edges to probe such that the directed graph induced by $E'$ contains a Hamiltonian path. When $G$ is a complete graph, we get the standard sorting problem, and it is well known that $\Theta(n \log n)$ comparisons are necessary and sufficient. An extensively studied special case of the generalized sorting problem is the nuts-and-bolts problem, where the allowed comparison graph is a complete bipartite graph between two equal-size sets. It is known that for this special case as well, there is a deterministic algorithm that sorts using $\Theta(n \log n)$ comparisons. However, when the allowed comparison graph is arbitrary, to our knowledge, no bound better than the trivial $O(n^2)$ bound is known. Our main result is a randomized algorithm that sorts any allowed comparison graph using $O(n^{3/2})$ comparisons with high probability (provided the input is sortable). We also study the sorting problem in randomly generated allowed comparison graphs, and show that when the edge probability is $p$, $O(\min\{n/p^2,\, n^{3/2}\sqrt{p}\})$ comparisons suffice on average to sort.
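For scale, here is a sketch (our own illustration, with assumed helper names, not the paper's algorithm) of the trivial baseline: probe every allowed edge and check that the probed DAG pins down a Hamiltonian path. The paper's randomized algorithm reduces the probe count from this $O(n^2)$ to $O(n^{3/2})$.

```python
from collections import defaultdict, deque

def trivial_generalized_sort(elements, allowed_pairs, compare):
    """Probe every allowed pair (the trivial O(n^2) baseline), orient
    each edge by the comparison outcome, then topologically sort and
    verify the order is forced, i.e. the DAG contains a Hamiltonian path."""
    succ = defaultdict(set)
    indeg = {x: 0 for x in elements}
    for u, v in allowed_pairs:              # one probe per allowed edge
        lo, hi = (u, v) if compare(u, v) else (v, u)
        if hi not in succ[lo]:
            succ[lo].add(hi)
            indeg[hi] += 1
    # Kahn's algorithm; a unique source at every step means the probed
    # edges determine a Hamiltonian path, so the input is sortable.
    order, sources = [], deque(x for x in elements if indeg[x] == 0)
    while sources:
        if len(sources) > 1:
            raise ValueError("order not determined by allowed comparisons")
        x = sources.popleft()
        order.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                sources.append(y)
    if len(order) != len(elements):
        raise ValueError("comparison graph does not connect all elements")
    return order

# Example: these five allowed comparisons already force the total order.
elems = [3, 1, 4, 2]
pairs = [(3, 1), (1, 4), (4, 2), (2, 3), (1, 2), (3, 4)]
print(trivial_generalized_sort(elems, pairs, lambda a, b: a < b))  # [1, 2, 3, 4]
```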
{"title":"Algorithms for the Generalized Sorting Problem","authors":"Zhiyi Huang, Sampath Kannan, S. Khanna","doi":"10.1109/FOCS.2011.54","DOIUrl":"https://doi.org/10.1109/FOCS.2011.54","url":null,"abstract":"We study the generalized sorting problem where we are given a set of n elements to be sorted but only a subset of all possible pair wise element comparisons is allowed. The goal is to determine the sorted order using the smallest possible number of allowed comparisons. The generalized sorting problem may be equivalently viewed as follows. Given an undirected graph G(V, E) where V is the set of elements to be sorted and E defines the set of allowed comparisons, adaptively find the smallest subset E¡ä subseteq E of edges to probe such that the directed graph induced by E¡ä contains a Hamiltonian path. When G is a complete graph, we get the standard sorting problem, and it is well-known that Theta(n log n) comparisons are necessary and sufficient. An extensively studied special case of the generalized sorting problem is the nuts and bolts problem where the allowed comparison graph is a complete bipartite graph between two equal-size sets. It is known that for this special case also, there is a deterministic algorithm that sorts using Theta(n log n) comparisons. However, when the allowed comparison graph is arbitrary, to our knowledge, no bound better than the trivial O(n^2) bound is known. Our main result is a randomized algorithm that sorts any allowed comparison graph using O(n^{3/2}) comparisons with high probability (provided the input is sortable). We also study the sorting problem in randomly generated allowed comparison graphs, and show that when the edge probability is p, O(min{ n/p^2, n^{3/2}sqrt{p}) comparisons suffice on average to sort.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127505007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A function $f: D \to R$ has Lipschitz constant $c$ if $d_R(f(x), f(y)) \leq c \cdot d_D(x, y)$ for all $x, y$ in $D$, where $d_R$ and $d_D$ denote the distance functions on the range and domain of $f$, respectively. We say a function is Lipschitz if it has Lipschitz constant 1. (Note that rescaling by a factor of $1/c$ converts a function with Lipschitz constant $c$ into a Lipschitz function.) In other words, Lipschitz functions are not very sensitive to small changes in the input. We initiate the study of testing and local reconstruction of the Lipschitz property of functions. A property tester has to distinguish functions with the property (in this case, Lipschitz) from functions that are $\varepsilon$-far from having the property, that is, differ from every function with the property on at least an $\varepsilon$ fraction of the domain. A local filter reconstructs an arbitrary function $f$ to ensure that the reconstructed function $g$ has the desired property (in this case, is Lipschitz), changing $f$ only when necessary. A local filter is given a function $f$ and a query $x$ and, after looking up the value of $f$ on a small number of points, it has to output $g(x)$ for some function $g$ which has the desired property and does not depend on $x$. If $f$ has the property, $g$ must be equal to $f$. We consider functions over the domains $\{0,1\}^d$, $\{1, \ldots, n\}$ and $\{1, \ldots, n\}^d$, equipped with the $\ell_1$ distance. We design efficient testers of the Lipschitz property for functions of the form $f: \{0,1\}^d \to \delta\mathbb{Z}$, where $\delta \in (0,1]$ and $\delta\mathbb{Z}$ is the set of integer multiples of $\delta$, and of the form $f: \{1, \ldots, n\} \to R$, where $R$ is (discretely) metrically convex. In the first case, the tester runs in time $O(d \cdot \min\{d, r\}/(\delta\varepsilon))$, where $r$ is the diameter of the image of $f$; in the second, in time $O((\log n)/\varepsilon)$. We give corresponding lower bounds of $\Omega(d)$ and $\Omega(\log n)$ on the query complexity (in the second case, only for nonadaptive 1-sided error testers). Our lower bound for functions over $\{0,1\}^d$ is tight for the case of the $\{0,1,2\}$ range and constant $\varepsilon$. The first tester implies an algorithm for functions of the form $f: \{0,1\}^d \to R$ that distinguishes Lipschitz functions from functions that are $\varepsilon$-far from $(1+\delta)$-Lipschitz. We also present a local filter of the Lipschitz property for functions of the form $f: \{1, \ldots, n\}^d \to R$ with lookup complexity $O((\log n + 1)^d)$. For functions over $\{0,1\}^d$, we show that every nonadaptive local filter has lookup complexity exponential in $d$. The testers that we developed have applications to program analysis. The reconstructors have applications to data privacy. For the first application, the Lipschitz property of the function computed by a program corresponds to a notion of robustness to noise in the data. The application to privacy is based on the fact that a function $f$ of entries in a database of sensitive information can be released with noise of magnitude proportional to a Lipschitz constant of $f$, while preserving the privacy of individuals whose data is stored in the database (Dwork, McSherry, Nissim and Smith, TCC 2006). We give a
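The privacy connection admits a compact illustration. Below is a minimal sketch of the Laplace mechanism of Dwork et al. (standard background, not code from this paper; the function and parameter names are our own): to release $f(\text{database})$ privately, add noise scaled to a Lipschitz constant of $f$ with respect to changing one entry.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) by inverse transform on U ~ Uniform(-1/2, 1/2).
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def release_privately(f, database, lipschitz_c, epsilon):
    """Laplace mechanism (Dwork, McSherry, Nissim, Smith, TCC 2006):
    noise of magnitude proportional to a Lipschitz constant of f (its
    sensitivity to one entry) gives epsilon-differential privacy. If f
    is only close to Lipschitz, a local filter can first replace it by
    a genuinely Lipschitz g, which is the reconstruction application."""
    return f(database) + laplace_noise(lipschitz_c / epsilon)

# Example: average age; changing one of the entries (ages in [0, 120])
# moves the average by at most 120 / len(ages).
ages = [34, 27, 58, 41]
avg = lambda db: sum(db) / len(db)
print(release_privately(avg, ages, lipschitz_c=120 / len(ages), epsilon=0.5))
```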
{"title":"Testing and Reconstruction of Lipschitz Functions with Applications to Data Privacy","authors":"Madhav Jha, Sofya Raskhodnikova","doi":"10.1137/110840741","DOIUrl":"https://doi.org/10.1137/110840741","url":null,"abstract":"A function f : D ? R has Lipschitz constant c if dR(f(x),f(y)) = c· dD(x,y) for all x,y in D, where dR and dD denote the distance functions on the range and domain of f, respectively. We say a function is Lipschitz if it has Lipschitz constant 1. (Note that rescaling by a factor of 1/c converts a function with a Lipschitz constant c into a Lipschitz function.) In other words, Lipschitz functions are not very sensitive to small changes in the input. We initiate the study of testing and local reconstruction of the Lipschitz property of functions. A property tester has to distinguish functions with the property (in this case, Lipschitz) from functions that are e -far from having the property, that is, differ from every function with the property on at least an e fraction of the domain. A local filter reconstructs an arbitrary function f to ensure that the reconstructed function g has the desired property (in this case, is Lipschitz), changing f only when necessary. A local filter is given a function f and a query x and, after looking up the value of f on a small number of points, it has to output g(x) for some function g, which has the desired property and does not depend on x. If f has the property, g must be equal to f. We consider functions over domains {0,1}d, {1,..., n} and {1,..., n}d, equipped with l1 distance. We design efficient testers of the Lipschitz property for functions of the form f:{0,1}d? d Z, where d ? (0,1] and d Z is the set of integer multiples of d, and of the form f: {1,..., n} ? R, where R is (discretely) metrically convex. In the first case, the tester runs in time O(d· min{d,r}/d e ), where r is the diameter of the image of f, in the second, in time O((log n)/e ). We give corresponding lower bounds of O (d) and O (log n) on the query complexity (in the second case, only for nonadaptive 1-sided error testers). Our lower bound for functions over {0,1}dis tight for the case of the {0,1,2} range and constant e. The first tester implies an algorithm for functions of the form f:{0,1}d? R that distinguishes Lipschitz functions from functions that are e -far from (1+d )-Lipschitz. We also present a local filter of the Lipschitz property for functions of the form f: {1,..., n}d ? R with lookup complexity O((log n+1)d). For functions of the form {0,1}d, we show that every nonadaptive local filter has lookup complexity exponential in d. The testers that we developed have applications to programs analysis. The reconstructors have applications to data privacy. For the first application, the Lipschitz property of the function computed by a program corresponds to a notion of robustness to noise in the data. The application to privacy is based on the fact that a function f of entries in a database of sensitive information can be released with noise of magnitude proportional to a Lipschitz constant of f, while preserving the privacy of individuals whose data is stored in the database (Dwork, McSherry, Nissim and Smith, TCC 2006). 
We give a","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124635522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impossibility theorems, such as Arrow's theorem and the Gibbard-Satterthwaite theorem, are an important element of social choice theory; they state that under certain natural constraints, social choice mechanisms are impossible to construct. In recent years, beginning with Kalai '01, much work has been done on finding robust versions of these theorems, showing that impossibility remains even when the constraints are almost always satisfied. In this work we present an algebraic scheme for producing such results. We demonstrate it for a variant of Arrow's theorem, found in Dokow and Holzman [5].
{"title":"An Algebraic Proof of a Robust Social Choice Impossibility Theorem","authors":"Dvir Falik, E. Friedgut","doi":"10.1109/FOCS.2011.72","DOIUrl":"https://doi.org/10.1109/FOCS.2011.72","url":null,"abstract":"An important element of social choice theory are impossibility theorems, such as Arrow's theorem and Gibbard-Satterthwaite's theorem, which state that under certain natural constraints, social choice mechanisms are impossible to construct. In recent years, beginning in Kalai'01, much work has been done in finding text it{robust} versions of these theorems, showing that impossibility remains even when the constraints are text it{almost} always satisfied. In this work we present an Algebraic scheme for producing such results. We demonstrate it for a variant of Arrow's theorem, found in Dokow and Holzman [5].","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121416222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called "continuous greedy", successfully tackles this issue for monotone submodular objective functions; however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, and that is always at least as good as the previously best known approximation ratio of 1 - 1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretically tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. [11]. The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation-solving and re-normalization steps of the framework and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus improving the approximations achieved for these problems via the framework.
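For intuition about what a continuous greedy algorithm does, here is a minimal sketch of the classical monotone variant (Calinescu, Chekuri, Pal and Vondrak), for a cardinality constraint only; this is background, not the paper's unified algorithm, whose modified update also handles non-monotone objectives. All helper names are our own.

```python
import random

def gradient_estimate(f, x, samples=200):
    """Estimate the gradient of the multilinear extension F(x) = E[f(R)],
    where R contains each i independently with probability x[i]:
    dF/dx_i = E[f(R + i) - f(R - i)], estimated by sampling."""
    n = len(x)
    g = [0.0] * n
    for _ in range(samples):
        R = {i for i in range(n) if random.random() < x[i]}
        for i in range(n):
            g[i] += f(R | {i}) - f(R - {i})
    return [gi / samples for gi in g]

def continuous_greedy(f, n, k, steps=50):
    """Monotone continuous greedy over the polytope {x >= 0 : sum(x) <= k}:
    repeatedly move x a 1/steps fraction toward the best feasible
    direction, the indicator of the k coordinates with largest estimated
    partial derivatives. Yields F(x) >= (1 - 1/e) * OPT in expectation."""
    x = [0.0] * n
    for _ in range(steps):
        g = gradient_estimate(f, x)
        best = sorted(range(n), key=lambda i: -g[i])[:k]
        for i in best:
            x[i] = min(1.0, x[i] + 1.0 / steps)
    return x

# Coverage function (monotone submodular): f(S) = size of the union.
sets = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {0, 4}]
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(continuous_greedy(f, n=5, k=2))  # fractional solution to be rounded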
{"title":"A Unified Continuous Greedy Algorithm for Submodular Maximization","authors":"Moran Feldman, J. Naor, Roy Schwartz","doi":"10.1109/FOCS.2011.46","DOIUrl":"https://doi.org/10.1109/FOCS.2011.46","url":null,"abstract":"The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the sub- modular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called \"continuous greedy\", successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of 1-1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretic tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. [11]. The improved approximation ratio of the unified continuous greedy algorithm implies improved ap- proximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation solving and re-normalization steps of the framework, and achieves, for some applications, further improvements. We also describe new monotone balanced con- tention resolution schemes for various matching, scheduling and packing problems, thus, improving the approximations achieved for these problems via the framework.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132021300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q = \Omega(\mathrm{disc}^2/\lg n)$, where $t_u$ is the worst-case update time, $t_q$ the worst-case query time and $\mathrm{disc}$ is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following:
- For halfspace range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d}/\lg n)$. This comes within a $\lg n \lg\lg n$ factor of the best known upper bound.
- For orthogonal range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(\lg^{d-2+\mu(d)} n)$, where $\mu(d) > 0$ is some small but strictly positive function of $d$.
- For ball range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d}/\lg n)$.
We note that the previous highest lower bound for any explicit problem, due to Pătraşcu [STOC'07], states that $t_q = \Omega((\lg n/\lg(\lg n + t_u))^2)$, which does however hold for a less restrictive class of data structures. Our result also has implications for the field of combinatorial discrepancy. Using textbook range searching solutions, we improve on the best known discrepancy upper bound for axis-aligned rectangles in dimensions $d \geq 3$.
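As a worked instance of the master tradeoff, the first bullet follows by plugging in the classical discrepancy lower bound for halfspaces ($\mathrm{disc} = \Omega(n^{1/2 - 1/(2d)})$, Alexander's bound, supplied here as background rather than restated in the abstract):

```latex
t_u t_q \;=\; \Omega\!\left(\frac{\mathrm{disc}^2}{\lg n}\right)
       \;=\; \Omega\!\left(\frac{\bigl(n^{1/2 - 1/(2d)}\bigr)^{2}}{\lg n}\right)
       \;=\; \Omega\!\left(\frac{n^{1 - 1/d}}{\lg n}\right).
```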
{"title":"On Range Searching in the Group Model and Combinatorial Discrepancy","authors":"Kasper Green Larsen","doi":"10.1137/120865240","DOIUrl":"https://doi.org/10.1137/120865240","url":null,"abstract":"In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_ut_q = Omega(disc^2/lg n)$ where $t_u$ is the worst case update time, $t_q$ the worst case query time and $disc$ is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following:begin{itemize}item For half space range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = Omega(n^{1-1/d}/lg n)$. This comes within a $lg n lg lg n$ factor of the best known upper bound. item For orthogonal range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = Omega(lg^{d-2+mu(d)}n)$, where $mu(d)>0$ is some small but strictly positive function of $d$.item For ball range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = Omega(n^{1-1/d}/lg n)$.end{itemize}We note that the previous highest lower bound for any explicit problem, due to P{v a}tra{c s}cu [STOC'07], states that $t_q =Omega((lg n/lg(lg n+t_u))^2)$, which does however hold for a less restrictive class of data structures. Our result also has implications for the field of combinatorial discrepancy. Using textbook range searching solutions, we improve on the best known discrepancy upper bound for axis-aligned rectangles in dimensions $d geq 3$.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134526461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We revisit the problem of reliable interactive communication over a noisy channel, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and fails with exponentially small probability in the total length of the protocol. Following a work by Schulman [Schulman 1993], our simulation uses a tree code; yet, as opposed to the non-constructive absolute tree code used by Schulman, we introduce a relaxation in the notion of goodness for a tree code and define a potent tree code. This relaxation allows us to construct an explicit emulation procedure for any two-party protocol. Our results also extend to the case of interactive multiparty communication. We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore, we are able to partially derandomize this result by means of $\epsilon$-biased distributions using only $O(N)$ random bits, where $N$ is the depth of the tree.
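A randomly generated tree code is simple to describe. The sketch below (a definitional illustration with an assumed alphabet size, not the paper's construction; potency and efficient decoding are exactly what the paper proves about such codes) shows the online property that suits interactive protocols: the $i$-th output symbol depends only on the first $i$ input bits.

```python
import random

class RandomTreeCode:
    """Tree code over alphabet {0, ..., q-1}: every edge of the infinite
    binary tree gets an independent uniform label, sampled lazily; the
    encoding of a bit string is the label sequence along its root path.
    Extending the input never changes symbols already emitted, which is
    what lets a tree code protect an interactive protocol symbol by symbol."""
    def __init__(self, alphabet_size=16, seed=0):
        self.q = alphabet_size
        self.labels = {}               # path prefix -> symbol, on demand
        self.rng = random.Random(seed)

    def encode(self, bits):
        out = []
        for i in range(1, len(bits) + 1):
            prefix = tuple(bits[:i])
            if prefix not in self.labels:
                self.labels[prefix] = self.rng.randrange(self.q)
            out.append(self.labels[prefix])
        return out

tc = RandomTreeCode()
print(tc.encode([0, 1, 1, 0]))  # online: early symbols fixed by early bits
print(tc.encode([0, 1, 0, 1]))  # shares the first two symbols with the above
```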
{"title":"Efficient and Explicit Coding for Interactive Communication","authors":"R. Gelles, Ankur Moitra, A. Sahai","doi":"10.1109/FOCS.2011.51","DOIUrl":"https://doi.org/10.1109/FOCS.2011.51","url":null,"abstract":"We revisit the problem of reliable interactive communication over a noisy channel, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memory less noisy channel with constant capacity, and fails with exponentially small probability in the total length of the protocol. Following a work by Schulman [Schulman 1993] our simulation uses a tree-code, yet as opposed to the non-constructive absolute tree-code used by Schulman, we introduce a relaxation in the notion of goodness for a tree code and define a potent tree code. This relaxation allows us to construct an explicit emulation procedure for any two-party protocol. Our results also extend to the case of interactive multiparty communication. We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore we are able to partially derandomize this result by means of epsilon-biased distributions using only O(N) random bits, where N is the depth of the tree.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134040418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the reconstruction problem for a multivariate polynomial $f$, we have black-box access to $f$ and the goal is to efficiently reconstruct a representation of $f$ in a suitable model of computation. We give a polynomial-time randomized algorithm for reconstructing random multilinear formulas. Our algorithm succeeds with high probability when given black-box access to the polynomial computed by a random multilinear formula according to a natural distribution. This is the strongest model of computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case. Previous results on this problem considered much weaker models, such as depth-3 circuits with various restrictions or read-once formulas. Our proof uses ranks of partial derivative matrices as a key ingredient, combined with an analysis of the algebraic structure of random multilinear formulas. Partial derivative matrices have earlier been used to prove lower bounds in a number of models of arithmetic complexity, including multilinear formulas and constant-depth circuits. As such, our results give supporting evidence to the general thesis that mathematical properties that capture efficient computation in a model should also enable learning algorithms for functions efficiently computable in that model.
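Partial derivative (partition) matrices are concrete objects. Below is a small sympy sketch (our own illustration with assumed helper names, not the paper's code) that builds such a matrix for a toy multilinear formula under a fixed variable partition and computes its rank, the quantity the lower-bound and reconstruction arguments read structure from.

```python
from itertools import combinations
import sympy as sp

def partial_derivative_matrix(poly, ys, zs):
    """Matrix M_f for the partition (ys, zs): rows indexed by multilinear
    monomials in ys, columns by multilinear monomials in zs; the (i, j)
    entry is the coefficient of rows[i] * cols[j] in f."""
    def monomials(vs):
        ms = [sp.Integer(1)]
        for r in range(1, len(vs) + 1):
            ms += [sp.prod(c) for c in combinations(vs, r)]
        return ms
    p = sp.Poly(sp.expand(poly), *ys, *zs)
    rows, cols = monomials(ys), monomials(zs)
    return sp.Matrix(len(rows), len(cols),
                     lambda i, j: p.coeff_monomial(rows[i] * cols[j]))

y1, y2, z1, z2 = sp.symbols('y1 y2 z1 z2')
f = (y1 + z1) * (y2 + z2)                      # tiny multilinear formula
M = partial_derivative_matrix(f, [y1, y2], [z1, z2])
print(M.rank())  # 4: rank is multiplicative across the two factors
```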
{"title":"Efficient Reconstruction of Random Multilinear Formulas","authors":"Ankit Gupta, N. Kayal, Satyanarayana V. Lokam","doi":"10.1109/FOCS.2011.70","DOIUrl":"https://doi.org/10.1109/FOCS.2011.70","url":null,"abstract":"In the reconstruction problem for a multivariate polynomial f, we have black box access to $f$ and the goal is to efficiently reconstruct a representation of $f$ in a suitable model of computation. We give a polynomial time randomized algorithm for reconstructing emph{random} multilinear formulas. Our algorithm succeeds with high probability when given black box access to the polynomial computed by a random multilinear formula according to a natural distribution. This is the strongest model of computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst-case. Previous results on this problem considered much weaker models such as depth-3 circuits with various restrictions or read-once formulas. Our proof uses ranks of partial derivative matrices as a key ingredient and combines it with analysis of the algebraic structure of random multilinear formulas. Partial derivative matrices have earlier been used to prove lower bounds in a number of models of arithmetic complexity, including multilinear formulas and constant depth circuits. As such, our results give supporting evidence to the general thesis that mathematical properties that capture efficient computation in a model should also enable learning algorithms for functions efficiently computable in that model.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125923550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For some positive constant $\epsilon_0$, we give a $(3/2 - \epsilon_0)$-approximation algorithm for the following problem: given a graph $G_0 = (V, E_0)$, find the shortest tour that visits every vertex at least once. This is a special case of the metric traveling salesman problem in which the underlying metric is defined by shortest-path distances in $G_0$. The result improves on the 3/2-approximation algorithm due to Christofides [C76] for this special case. As in Christofides' algorithm, our algorithm finds a spanning tree whose cost is upper-bounded by the optimum, and then finds the minimum-cost Eulerian augmentation (or T-join) of that tree. The main difference is in the selection of the spanning tree. Except in certain cases where the solution of the LP is nearly integral, we select the spanning tree randomly by sampling from a maximum-entropy distribution defined by the linear programming relaxation. Despite the simplicity of the algorithm, the analysis builds on a variety of ideas such as properties of strongly Rayleigh measures from probability theory, graph-theoretic results on the structure of near-minimum cuts, and the integrality of the T-join polytope from polyhedral theory. Also, as a byproduct of our result, we show new properties of the near-minimum cuts of any graph, which may be of independent interest.
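For orientation, here is a sketch of the Christofides skeleton the paper modifies, written with networkx (our own illustration; the paper's algorithm replaces the minimum spanning tree below with a tree sampled from the maximum-entropy distribution given by the LP relaxation, a step not reproduced here):

```python
import networkx as nx

def christofides_skeleton(G):
    """Christofides on a complete weighted graph G with metric weights:
    spanning tree + minimum-weight perfect matching on the tree's
    odd-degree vertices (the cheapest T-join here), then shortcut the
    Eulerian circuit. The (3/2 - eps_0) algorithm changes only how the
    spanning tree is chosen."""
    tree = nx.minimum_spanning_tree(G, weight="weight")
    odd = [v for v in tree if tree.degree(v) % 2 == 1]
    matching = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    multi = nx.MultiGraph(tree.edges(data=True))
    multi.add_edges_from((u, v, G[u][v]) for u, v in matching)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(multi):   # all degrees are now even
        if u not in seen:                     # shortcut repeated vertices
            seen.add(u)
            tour.append(u)
    return tour + tour[:1]

# Tiny metric instance: points on a line, weight = |i - j|.
G = nx.complete_graph(5)
for u, v in G.edges:
    G[u][v]["weight"] = abs(u - v)
print(christofides_skeleton(G))
```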
{"title":"A Randomized Rounding Approach to the Traveling Salesman Problem","authors":"S. Gharan, A. Saberi, Mohit Singh","doi":"10.1109/FOCS.2011.80","DOIUrl":"https://doi.org/10.1109/FOCS.2011.80","url":null,"abstract":"For some positive constant eps_0, we give a (3/2-eps_0)-approximation algorithm for the following problem: given a graph G_0=(V,E_0), find the shortest tour that visits every vertex at least once. This is a special case of the metric traveling salesman problem when the underlying metric is defined by shortest path distances in G_0. The result improves on the 3/2-approximation algorithm due to Christofides [C76] for this special case. Similar to Christofides, our algorithm finds a spanning tree whose cost is upper bounded by the optimum, then it finds the minimum cost Eulerian augmentation (or T-join) of that tree. The main difference is in the selection of the spanning tree. Except in certain cases where the solution of LP is nearly integral, we select the spanning tree randomly by sampling from a maximum entropy distribution defined by the linear programming relaxation. Despite the simplicity of the algorithm, the analysis builds on a variety of ideas such as properties of strongly Rayleigh measures from probability theory, graph theoretical results on the structure of near minimum cuts, and the integrality of the T-join polytope from polyhedral theory. Also, as a byproduct of our result, we show new properties of the near minimum cuts of any graph, which may be of independent interest.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"875 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114149569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We construct an explicit disperser for affine sources over $\mathbb{F}_2^n$ with entropy $k = 2^{\log^{0.9} n} = n^{o(1)}$. This is a polynomial-time computable function $D: \mathbb{F}_2^n \to \{0,1\}$ such that for every affine subspace $V$ of $\mathbb{F}_2^n$ of dimension at least $k$, $D(V) = \{0,1\}$. This improves the best previous construction of Ben-Sasson and Kopparty (STOC 2009), which achieved $k = \Omega(n^{4/5})$. Our technique follows a high-level approach that was developed in Barak, Kindler, Shaltiel, Sudakov and Wigderson (J. ACM 2010) and Barak, Rao, Shaltiel and Wigderson (STOC 2006) in the context of dispersers for two independent general sources. The main steps are:
- Adjust the high-level approach to make it suitable for affine sources.
- Implement a "challenge-response game" for affine sources (in the spirit of the two aforementioned papers, which introduced such games for two independent general sources).
- In order to implement the game, construct extractors for affine block-wise sources. For this we use ideas and components by Rao (CCC 2009).
- Combining the three items above, we obtain dispersers for affine sources with entropy larger than $\sqrt{n}$. We then use a recursive win-win analysis in the spirit of Reingold, Shaltiel and Wigderson (SICOMP 2006) and Barak, Rao, Shaltiel and Wigderson (STOC 2006) to get affine dispersers with entropy less than $\sqrt{n}$.
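To make the object concrete, here is a brute-force checker of the disperser property at toy scale (a definitional illustration only; nothing below resembles the paper's construction, and the example function is our own choice):

```python
from itertools import combinations, product

def affine_subspaces(n, dim):
    """Yield every dim-dimensional affine subspace of F_2^n as a set of
    tuples: a shift plus the span of a linearly independent set
    (duplicate subspaces may be yielded; harmless for a check)."""
    vectors = list(product((0, 1), repeat=n))
    def span(basis):
        S = {(0,) * n}
        for b in basis:
            S |= {tuple(x ^ y for x, y in zip(v, b)) for v in S}
        return S
    for basis in combinations(vectors[1:], dim):
        sp = span(basis)
        if len(sp) != 2 ** dim:        # linearly dependent set, skip
            continue
        for shift in vectors:
            yield {tuple(x ^ s for x, s in zip(v, shift)) for v in sp}

def is_affine_disperser(D, n, k):
    """D: F_2^n -> {0,1} disperses dimension-k affine sources iff it is
    non-constant on every affine subspace of dimension k (and hence on
    every larger one, since each contains a dimension-k subspace)."""
    return all(len({D(v) for v in V}) == 2 for V in affine_subspaces(n, k))

# Toy example: the bent function x1*x2 + x3*x4 is non-constant on every
# 3-dimensional affine subspace of F_2^4 (verified by brute force).
D = lambda v: (v[0] & v[1]) ^ (v[2] & v[3])
print(is_affine_disperser(D, n=4, k=3))  # True
```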
{"title":"Dispersers for Affine Sources with Sub-polynomial Entropy","authors":"Ronen Shaltiel","doi":"10.1109/FOCS.2011.37","DOIUrl":"https://doi.org/10.1109/FOCS.2011.37","url":null,"abstract":"We construct an explicit disperser for affine sources over $F_2^n$ with entropy $k=2^{log^{0.9} n}=n^{o(1)}$. This is a polynomial time computable function $D:F_2^n ar B$ such that for every affine space $V$ of $F_2^n$ that has dimension at least $k$, $D(V)=set{0,1}$. This improves the best previous construction of Ben-Sasson and Kop party (STOC 2009) that achieved $k = Omega(n^{4/5})$.Our technique follows a high level approach that was developed in Barak, Kindler, Shaltiel, Sudakov and Wigderson (J. ACM 2010) and Barak, Rao, Shaltiel and Wigderson (STOC 2006) in the context of dispersers for two independent general sources. The main steps are:begin{itemize}item Adjust the high level approach to make it suitable for affine sources. item Implement a ``challenge-response game'' for affine sources (in the spirit of the two aforementioned papers that introduced such games for two independent general sources).item In order to implement the game, we construct extractors for affine block-wise sources. For this we use ideas and components by Rao (CCC 2009). item Combining the three items above, we obtain dispersers for affine sources with entropy larger than $sqrt{n}$.We use a recursive win-win analysis in the spirit of Rein gold, Shaltiel and Wigderson (SICOMP 2006) and Barak, Rao, Shaltiel and Wigderson (STOC 2006) to get affine dispersers with entropy less than $sqrt{n}$.end{itemize}","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122272701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}