We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager [15] and shows that for some learning problems a large storage space is crucial. More formally, in the problem of parity learning, an unknown string x ∈ {0,1}^n is chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where each a_t is uniformly distributed over {0,1}^n and b_t is the inner product of a_t and x, modulo 2. We show that any algorithm for parity learning that uses less than n^2/25 bits of memory requires an exponential number of samples. Previously, there was no non-trivial lower bound on the number of samples needed for any learning problem, even if the allowed memory size is O(n) (where n is the space needed to store one sample). We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length n, as well as time complexity of n per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses less than n^2/25 memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.
{"title":"Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning","authors":"R. Raz","doi":"10.1145/3186563","DOIUrl":"https://doi.org/10.1145/3186563","url":null,"abstract":"We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager [15] and shows that for some learning problems a large storage space is crucial. More formally, in the problem of parity learning, an unknown string x ϵ {0,1}n was chosen uniformly at random. A learner tries to learn x from a stream of samples (a1, b1), (a2, b2)..., where each at is uniformly distributed over {0,1}n and bt is the inner product of at and x, modulo 2. We show that any algorithm for parity learning, that uses less than n2/25 bits of memory, requires an exponential number of samples. Previously, there was no non-trivial lower bound on the number of samples needed, for any learning problem, even if the allowed memory size is O(n) (where n is the space needed to store one sample). We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length n, as well as time complexity of n per encryption/decryption of each bit, and is provenly and unconditionally secure as long as the attacker uses less than n2/25 memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127321074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study problems in distribution property testing: Given sample access to one or more unknown discrete distributions, we want to determine whether they have some global property or are epsilon-far from having the property in L1 distance (equivalently, total variation distance, or "statistical distance"). In this work, we give a novel general approach for distribution testing. We describe two techniques: our first technique gives sample-optimal testers, while our second technique gives matching sample lower bounds. As a consequence, we resolve the sample complexity of a wide variety of testing problems. Our upper bounds are obtained via a modular reduction-based approach. Our approach yields optimal testers for numerous problems by using a standard L2-identity tester as a black-box. Using this recipe, we obtain simple estimators for a wide range of problems, encompassing many problems previously studied in the TCS literature, namely: (1) identity testing to a fixed distribution, (2) closeness testing between two unknown distributions (with equal/unequal sample sizes), (3) independence testing (in any number of dimensions), (4) closeness testing for collections of distributions, and (5) testing histograms. For all of these problems, our testers are sample-optimal, up to constant factors. With the exception of (1), ours are the first sample-optimal testers for the corresponding problems. Moreover, our estimators are significantly simpler to state and analyze compared to previous results. As an important application of our reduction-based technique, we obtain the first adaptive algorithm for testing equivalence between two unknown distributions. The sample complexity of our algorithm depends on the structure of the unknown distributions - as opposed to merely their domain size - and is significantly better compared to the worst-case optimal L1-tester in many natural instances. Moreover, our technique naturally generalizes to other metrics beyond the L1-distance. As an illustration of its flexibility, we use it to obtain the first near-optimal equivalence tester under the Hellinger distance. Our lower bounds are obtained via a direct information-theoretic approach: Given a candidate hard instance, our proof proceeds by bounding the mutual information between appropriate random variables. While this is a classical method in information theory, prior to our work, it had not been used in this context. Previous lower bounds relied either on the birthday paradox or on moment-matching and were thus restricted to symmetric properties. Our lower bound approach does not suffer from any such restrictions and gives tight sample lower bounds for the aforementioned problems.
{"title":"A New Approach for Testing Properties of Discrete Distributions","authors":"Ilias Diakonikolas, D. Kane","doi":"10.1109/FOCS.2016.78","DOIUrl":"https://doi.org/10.1109/FOCS.2016.78","url":null,"abstract":"We study problems in distribution property testing: Given sample access to one or more unknown discrete distributions, we want to determine whether they have some global property or are epsilon-far from having the property in L1 distance (equivalently, total variation distance, or \"statistical distance\").In this work, we give a novel general approach for distribution testing. We describe two techniques: our first technique gives sample-optimal testers, while our second technique gives matching sample lower bounds. As a consequence, we resolve the sample complexity of a wide variety of testing problems. Our upper bounds are obtained via a modular reduction-based approach. Our approach yields optimal testers for numerous problemsby using a standard L2-identity tester as a black-box. Using this recipe, we obtain simple estimators for a wide range of problems, encompassing many problems previously studied in the TCS literature, namely: (1) identity testing to a fixed distribution, (2) closeness testing between two unknown distributions (with equal/unequal sample sizes), (3) independence testing (in any number of dimensions), (4) closeness testing for collections of distributions, and(5) testing histograms. For all of these problems, our testers are sample-optimal, up to constant factors. With the exception of (1), ours are the first sample-optimal testers for the corresponding problems. Moreover, our estimators are significantly simpler to state and analyze compared to previous results. As an important application of our reduction-based technique, we obtain the first adaptive algorithm for testing equivalence betweentwo unknown distributions. The sample complexity of our algorithm depends on the structure of the unknown distributions - as opposed to merely their domain size -and is significantly better compared to the worst-case optimal L1-tester in many natural instances. Moreover, our technique naturally generalizes to other metrics beyond the L1-distance. As an illustration of its flexibility, we use it to obtain the first near-optimal equivalence testerunder the Hellinger distance. Our lower bounds are obtained via a direct information-theoretic approach: Given a candidate hard instance, our proof proceeds by boundingthe mutual information between appropriate random variables. While this is a classical method in information theory, prior to our work, it had not been used in this context. Previous lower bounds relied either on the birthday paradox, oron moment-matching and were thus restricted to symmetric properties. Our lower bound approach does not suffer from any such restrictions and gives tight sample lower bounds for the aforementioned problems.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130277052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can quantum computers solve optimization problems much more quickly than classical computers? One major piece of evidence for this proposition has been the fact that Quantum Annealing (QA) finds the minimum of some cost functions exponentially more quickly than classical Simulated Annealing (SA). One such cost function is the simple “Hamming weight with a spike” function, in which the input is an n-bit string and the objective function is simply the Hamming weight, plus a tall thin barrier centered around Hamming weight n/4. While the global minimum of this cost function can be found by inspection, it is also a plausible toy model of the sort of local minima that arise in real-world optimization problems. It was shown by Farhi, Goldstone and Gutmann [1] that for this example SA takes exponential time and QA takes polynomial time, and the same result was generalized by Reichardt [2] to include barriers with width n^ζ and height n^α for ζ + α ≤ 1/2. This advantage could be explained in terms of quantum-mechanical “tunneling.” Our work considers a classical algorithm known as Simulated Quantum Annealing (SQA) which relates certain quantum systems to classical Markov chains. By proving that these chains mix rapidly, we show that SQA runs in polynomial time on the “Hamming weight with a spike” problem in much of the parameter regime where QA achieves an exponential advantage over SA. While our analysis only covers this toy model, it can be seen as evidence against the prospect of exponential quantum speedup using tunneling. Our technical contributions include extending the canonical path method for analyzing Markov chains to cover the case when not all vertices can be connected by low-congestion paths. We also develop methods for taking advantage of warm starts and for relating the quantum state in QA to the probability distribution in SQA. These techniques may be of use in future studies of SQA or of rapidly mixing Markov chains in general.
{"title":"Simulated Quantum Annealing Can Be Exponentially Faster Than Classical Simulated Annealing","authors":"E. Crosson, A. Harrow","doi":"10.1109/FOCS.2016.81","DOIUrl":"https://doi.org/10.1109/FOCS.2016.81","url":null,"abstract":"Can quantum computers solve optimization problems much more quickly than classical computers? One major piece of evidence for this proposition has been the fact that Quantum Annealing (QA) finds the minimum of some cost functions exponentially more quickly than classical Simulated Annealing (SA). One such cost function is the simple “Hamming weight with a spike” function in which the input is an n-bit string and the objective function is simply the Hamming weight, plus a tall thin barrier centered around Hamming weight n/4. While the global minimum of this cost function can be found by inspection, it is also a plausible toy model of the sort of local minima that arise in realworld optimization problems. It was shown by Farhi, Goldstone and Gutmann [1] that for this example SA takes exponential time and QA takes polynomial time, and the same result was generalized by Reichardt [2] to include barriers with width nζ and height nα for ζ + α ≤ 1/2. This advantage could be explained in terms of quantummechanical “tunneling.” Our work considers a classical algorithm known as Simulated Quantum Annealing (SQA) which relates certain quantum systems to classical Markov chains. By proving that these chains mix rapidly, we show that SQA runs in polynomial time on the Hamming weight with spike problem in much of the parameter regime where QA achieves an exponential advantage over SA. While our analysis only covers this toy model, it can be seen as evidence against the prospect of exponential quantum speedup using tunneling. Our technical contributions include extending the canonical path method for analyzing Markov chains to cover the case when not all vertices can be connected by low-congestion paths. We also develop methods for taking advantage of warm starts and for relating the quantum state in QA to the probability distribution in SQA. These techniques may be of use in future studies of SQA or of rapidly mixing Markov chains in general.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116028688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The geometric complexity theory program is an approach to separate algebraic complexity classes, more precisely to show the superpolynomial growth of the determinantal complexity dc(perm) of the permanent polynomial. Mulmuley and Sohoni showed that the vanishing behaviour of rectangular Kronecker coefficients could in principle be used to show some lower bounds on dc(perm) and they conjectured that superpolynomial lower bounds on dc(perm) could be shown in this way. In this paper we disprove this conjecture by Mulmuley and Sohoni, i.e., we prove that the vanishing of rectangular Kronecker coefficients cannot be used to prove superpolynomial lower bounds on dc(perm).
{"title":"Rectangular Kronecker Coefficients and Plethysms in Geometric Complexity Theory","authors":"Christian Ikenmeyer, G. Panova","doi":"10.1109/FOCS.2016.50","DOIUrl":"https://doi.org/10.1109/FOCS.2016.50","url":null,"abstract":"The geometric complexity theory program is an approach to separate algebraic complexity classes, more precisely to show the superpolynomial growth of the determinantal complexity dc(perm) of the permanent polynomial. Mulmuley and Sohoni showed that the vanishing behaviour of rectangular Kronecker coefficients could in principle be used to show some lower bounds on dc(perm) and they conjectured that superpolynomial lower bounds on dc(perm) could be shown in this way. In this paper we disprove this conjecture by Mulmuley and Sohoni, i.e., we prove that the vanishing of rectangular Kronecker coefficients cannot be used to prove superpolynomial lower bounds on dc(perm).","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125990465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regular expressions constitute a fundamental notion in formal language theory and are frequently used in computer science to define search patterns. In particular, regular expression matching and membership testing are widely used computational primitives, employed in many programming languages and text processing utilities. A classic algorithm for these problems constructs and simulates a non-deterministic finite automaton corresponding to the expression, resulting in an O(m n) running time (where m is the length of the pattern and n is the length of the text). This running time can be improved slightly (by a polylogarithmic factor), but no significantly faster solutions are known. At the same time, much faster algorithms exist for various special cases of regular expressions, including dictionary matching, wildcard matching, subset matching, the word break problem, etc. In this paper, we show that the complexity of regular expression matching can be characterized based on its depth (when interpreted as a formula). Our results hold for expressions involving concatenation, OR, Kleene star and Kleene plus. For regular expressions of depth two (involving any combination of the above operators), we show the following dichotomy: matching and membership testing can be solved in near-linear time, except for "concatenations of stars", which cannot be solved in strongly sub-quadratic time assuming the Strong Exponential Time Hypothesis (SETH). For regular expressions of depth three the picture is more complex. Nevertheless, we show that all problems can either be solved in strongly sub-quadratic time, or cannot be solved in strongly sub-quadratic time assuming SETH. An intriguing special case of membership testing involves regular expressions of the form "a star of an OR of concatenations", e.g., [a|ab|bc]*. This corresponds to the so-called word break problem, for which a dynamic programming algorithm with a runtime of (roughly) O(n √m) is known. We show that the latter bound is not tight and improve the runtime to O(n · m^{0.44...}).
{"title":"Which Regular Expression Patterns Are Hard to Match?","authors":"A. Backurs, P. Indyk","doi":"10.1109/FOCS.2016.56","DOIUrl":"https://doi.org/10.1109/FOCS.2016.56","url":null,"abstract":"Regular expressions constitute a fundamental notion in formal language theory and are frequently used in computer science to define search patterns. In particular, regular expression matching and membership testing are widely used computational primitives, employed in many programming languages and text processing utilities. A classic algorithm for these problems constructs and simulates a non-deterministic finite automaton corresponding to the expression, resulting in an O(m n) running time (where m is the length of the pattern and n is the length of the text). This running time can be improved slightly (by a polylogarithmic factor), but no significantly faster solutions are known. At the same time, much faster algorithms exist for various special cases of regular expressions, including dictionary matching, wildcard matching, subset matching, word break problem etc. In this paper, we show that the complexity of regular expression matching can be characterized based on its depth (when interpreted as a formula). Our results hold for expressions involving concatenation, OR, Kleene star and Kleene plus. For regular expressions of depth two (involving any combination of the above operators), we show the following dichotomy: matching and membership testing can be solved in near-linear time, except for \"concatenations of stars\", which cannot be solved in strongly sub-quadratic time assuming the Strong Exponential Time Hypothesis (SETH). For regular expressions of depth three the picture is more complex. Nevertheless, we show that all problems can either be solved in strongly sub-quadratic time, or cannot be solved in strongly sub-quadratic time assuming SETH. An intriguing special case of membership testing involves regular expressions of the form \"a star of an OR of concatenations\", e.g., [a|ab|bc]*. This corresponds to the so-called word break problem, for which a dynamic programming algorithm with a runtime of (roughly) O(n √m) is known. We show that the latter bound is not tight and improve the runtime to O(n m0.44...).","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116750749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Star-convexity is a significant relaxation of the notion of convexity that allows for functions that do not have (sub)gradients at most points, and may even be discontinuous everywhere except at the global optimum. We introduce a polynomial time algorithm for optimizing the class of star-convex functions, under no Lipschitz or other smoothness assumptions whatsoever, and no restrictions except exponential boundedness on a region about the origin, and Lebesgue measurability. The algorithm's performance is polynomial in the requested number of digits of accuracy and the dimension of the search domain. This contrasts with the previous best known algorithm of Nesterov and Polyak, which has exponential dependence on the number of digits of accuracy, but only n^ω dependence on the dimension n (where ω is the matrix multiplication exponent), and which further requires Lipschitz second differentiability of the function [1].
{"title":"Optimizing Star-Convex Functions","authors":"Jasper C. H. Lee, Paul Valiant","doi":"10.1109/FOCS.2016.71","DOIUrl":"https://doi.org/10.1109/FOCS.2016.71","url":null,"abstract":"Star-convexity is a significant relaxation of the notion of convexity, that allows for functions that do not have (sub)gradients at most points, and may even be discontinuous everywhere except at the global optimum. We introduce a polynomial time algorithm for optimizing the class of star-convex functions, under no Lipschitz or other smoothness assumptions whatsoever, and no restrictions except exponential boundedness on a region about the origin, and Lebesgue measurability. The algorithm's performance is polynomial in the requested number of digits of accuracy and the dimension of the search domain. This contrasts with the previous best known algorithm of Nesterov and Polyak which has exponential dependence on the number of digits of accuracy, but only n! dependence on the dimension n (where ! is the matrix multiplication exponent), and which further requires Lipschitz second differentiability of the function [1].","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123695115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symbolic matrices in non-commuting variables, and the related structural and algorithmic questions, have a remarkable number of diverse origins and motivations. They arise independently in (commutative) invariant theory and representation theory, linear algebra, optimization, linear system theory, quantum information theory, and naturally in non-commutative algebra. In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over Q is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time [1] (whether or not randomization is allowed). The main (simple!) technical contribution of this paper is an analysis of an existing “operator scaling” algorithm due to Gurvits [2], which solved some special cases of the same problem we do (these already include optimization problems like matroid intersection). This analysis of the running time of Gurvits' algorithm combines results from some of these different fields. It lower bounds a parameter of quantum maps called capacity, via degree bounds from algebraic geometry on the Left-Right group action, which in turn is relevant due to a certain characterization of the free skew (non-commutative) field. Via the known connections above, our algorithm efficiently solves several problems in different areas which had only exponential-time algorithms prior to this work. These include the “word problem” for the free skew field (namely identity testing for rational expressions over non-commuting variables), testing if a quantum operator is “rank decreasing”, and the membership problem in the null-cone of a natural group action arising in Geometric Complexity Theory (GCT). Moreover, extending our algorithm to actually compute the non-commutative rank of a symbolic matrix yields an efficient factor-2 approximation to the standard commutative rank. This naturally suggests the challenge to improve this approximation factor, noting that a fully polynomial approximation scheme may lead to a deterministic PIT algorithm. Finally, our algorithm may also be viewed as efficiently solving a family of structured systems of quadratic equations, which seem general enough to encode interesting decision and optimization problems.
{"title":"A Deterministic Polynomial Time Algorithm for Non-commutative Rational Identity Testing","authors":"A. Garg, L. Gurvits, R. Oliveira, A. Wigderson","doi":"10.1109/FOCS.2016.95","DOIUrl":"https://doi.org/10.1109/FOCS.2016.95","url":null,"abstract":"Symbolic matrices in non-commuting variables, and the related structural and algorithmic questions, have a remarkable number of diverse origins and motivations. They arise independently in (commutative) invariant theory and representation theory, linear algebra, optimization, linear system theory, quantum information theory, and naturally in non-commutative algebra. In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over Q is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time [1] (whether or not randomization is allowed). The main (simple!) technical contribution of this paper is an analysis of an existing “operator scaling” algorithm due to Gurvits [2], which solved some special cases of the same problem we do (these already include optimization problems like matroid intersection). This analysis of the running time of Gurvits' algorithm combines results from some of these different fields. It lower bounds a parameter of quantum maps called capacity, via degree bounds from algebraic geometry on the Left Right group action, which in turn is relevant due to certain characterization of the free skew (non-commutative) field. Via the known connections above, our algorithm efficiently solves several problems in different areas which had only exponential-time algorithms prior to this work. These include the “word problem” for the free skew field (namely identity testing for rational expressions over non-commuting variables), testing if a quantum operator is “rank decreasing”, and the membership problem in the null-cone of a natural group action arising in Geometric Complexity Theory (GCT). Moreover, extending our algorithm to actually compute the non-commutative rank of a symbolic matrix, yields an efficient factor-2 approximation to the standard commutative rank. This naturally suggests the challenge to improve this approximation factor, noting that a fully polynomial approximation scheme may lead to a deterministic PIT algorithm. Finally, our algorithm may also be viewed as efficiently solving a family of structured systems of quadratic equations, which seem general enough to encode interesting decision and optimization problems1.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116879401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locally finding a solution to symmetry-breaking tasks such as vertex-coloring, edge-coloring, maximal matching, maximal independent set, etc., is a long-standing challenge in distributed network computing. More recently, it has also become a challenge in the framework of centralized local computation. We introduce conflict coloring as a general symmetry-breaking task that includes all the aforementioned tasks as specific instantiations - conflict coloring includes all locally checkable labeling tasks from [Naor & Stockmeyer, STOC 1993]. Conflict coloring is characterized by two parameters l and d, where the former measures the amount of freedom given to the nodes for selecting their colors, and the latter measures the number of constraints which colors of adjacent nodes are subject to. We show that, in the standard LOCAL model for distributed network computing, if l/d > Δ, then conflict coloring can be solved in Õ(√Δ) + log* n rounds in n-node graphs with maximum degree Δ, where Õ ignores the polylog factors in Δ. The dependency in n is optimal, as a consequence of the Ω(log* n) lower bound by [Linial, SIAM J. Comp. 1992] for (Δ + 1)-coloring. An important special case of our result is a significant improvement over the best known algorithm for distributed (Δ + 1)-coloring due to [Barenboim, PODC 2015], which required Õ(Δ^{3/4}) + log* n rounds. Improvements for other variants of coloring, including (Δ + 1)-list-coloring, (2Δ - 1)-edge-coloring, coloring with forbidden color distances, etc., also follow from our general result on conflict coloring. Likewise, in the framework of centralized local computation algorithms (LCAs), our general result yields an LCA which requires a smaller number of probes than the previously best known algorithm for vertex-coloring, and works for a wide range of coloring problems.
{"title":"Local Conflict Coloring","authors":"P. Fraigniaud, Marc Heinrich, A. Kosowski","doi":"10.1109/FOCS.2016.73","DOIUrl":"https://doi.org/10.1109/FOCS.2016.73","url":null,"abstract":"Locally finding a solution to symmetry-breaking tasks such as vertex-coloring, edge-coloring, maximal matching, maximal independent set, etc., is a long-standing challenge in distributed network computing. More recently, it has also become a challenge in the framework of centralized local computation. We introduce conflict coloring as a general symmetry-breaking task that includes all the aforementioned tasks as specific instantiations - conflict coloring includes all locally checkable labeling tasks from [Naor & Stockmeyer, STOC 1993]. Conflict coloring is characterized by two parameters l and d, where the former measures the amount of freedom given to the nodes for selecting their colors, and the latter measures the number of constraints which colors of adjacent nodes are subject to. We show that, in the standard LOCAL model for distributed network computing, if l/d > Δ, then conflict coloring can be solved in Õ(√Δ)+log*n rounds in n-node graphs with maximum degree Δ, where Õ ignores the polylog factors in Δ. The dependency in n is optimal, as a consequence of the Ω(log*n) lower bound by [Linial, SIAM J. Comp. 1992] for (Δ + 1)-coloring. An important special case of our result is a significant improvement over the best known algorithm for distributed (Δ + 1)-coloring due to [Barenboim, PODC 2015], which required Õ(Δ3/4) + log*n rounds. Improvements for other variants of coloring, including (Δ + 1)-list-coloring, (2Δ-1)-edge-coloring, coloring with forbidden color distances, etc., also follow from our general result on conflict coloring. Likewise, in the framework of centralized local computation algorithms (LCAs), our general result yields an LCA which requires a smaller number of probes than the previously best known algorithm for vertex-coloring, and works for a wide range of coloring problems.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124790294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An extensive body of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [1], [2], [3]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to the players becomes exponential. We show that this obstacle is inevitable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good an outcome polynomially-bounded bidders can discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our positive and negative results extend to several auction formats that have been studied in the literature via the smoothness paradigm. Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing significant gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
{"title":"Learning in Auctions: Regret is Hard, Envy is Easy","authors":"C. Daskalakis, Vasilis Syrgkanis","doi":"10.1109/FOCS.2016.31","DOIUrl":"https://doi.org/10.1109/FOCS.2016.31","url":null,"abstract":"An extensive body of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [1], [2], [3]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient as the number of actions available to the players becomes exponential. We show that this obstacle is inevitable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed \"no-envy learning.\" This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our positive and negative results extend to several auction formats that have been studied in the literature via the smoothness paradigm. Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing significant gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131469964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove that there is no fpt-algorithm that can approximate the dominating set problem with any constant ratio, unless FPT = W[1]. Our hardness reduction is built on the second author's recent W[1]-hardness proof of the biclique problem [25]. This yields, among other things, a proof without the PCP machinery that the classical dominating set problem has no polynomial-time constant-factor approximation under the Exponential Time Hypothesis.
{"title":"The Constant Inapproximability of the Parameterized Dominating Set Problem","authors":"Yijia Chen, Bingkai Lin","doi":"10.1109/FOCS.2016.61","DOIUrl":"https://doi.org/10.1109/FOCS.2016.61","url":null,"abstract":"We prove that there is no fpt-algorithm that can approximate the dominating set problem with any constant ratio, unless FPT = W[1]. Our hardness reduction is built on the second author's recent W[1]-hardness proof of the biclique problem [25]. This yields, among other things, a proof without the PCP machinery that the classical dominating set problem has no polynomial time constant approximation under the exponential time hypothesis.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122886358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}