We investigate whether, if NP is slightly hard on average, it is very hard on average.
{"title":"Hardness amplification within NP","authors":"R. O'Donnell","doi":"10.1145/509907.510015","DOIUrl":"https://doi.org/10.1145/509907.510015","url":null,"abstract":"We investigate whether, if NP is slightly hard on average, it is very hard on average.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122289824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given, as follows. One of the central questions in topology is determining whether a given curve is knotted or unknotted. An algorithm to decide this question was given by Haken (1961), using the technique of normal surfaces. These surfaces are rigid, discretized surfaces, well suited to algorithmic analysis. Any oriented surface without boundary can be obtained from a sphere by adding "handles". The number of handles is called the genus of the surface, and the smallest genus of a spanning surface for a curve is called the genus of the curve. A curve has genus zero if and only if it is unknotted. Schubert extended Haken's work, giving an algorithm to determine the genus of a curve in any 3-manifold. We examine 3-MANIFOLD KNOT GENUS, the problem of deciding whether a polygonal knot in a closed triangulated three-dimensional manifold bounds a surface of genus at most g. Previous work of Hass, Lagarias and Pippenger had shown that this problem is in PSPACE; no lower bounds on the running time were previously known. We show that this problem is NP-complete.
{"title":"3-MANIFOLD KNOT GENUS is NP-complete","authors":"I. Agol, J. Hass, W. Thurston","doi":"10.1145/509907.510016","DOIUrl":"https://doi.org/10.1145/509907.510016","url":null,"abstract":"Summary form only given, as follows. One of the central questions in topology is determining whether a given curve is knotted or unknotted. An algorithm to decide this question was given by Haken (1961), using the technique of normal surfaces. These surfaces are rigid, discretized surfaces, well suited for algorithmic analysis. Any oriented surface without boundary can be obtained from a sphere by adding \"handles\". The number of handles is called the genus of the surface, and the smallest genus of a spanning surface for a curve is called the genus of the curve. A curve has genus zero if and only if it is unknotted. Schubert extended Haken's work, giving an algorithm to determine the genus of a curve in any 3-manifold. We examine the problem of deciding whether a polygonal knot in a closed triangulated three-dimensional manifold bounds a surface of genus at most g, 3-MANIFOLD KNOT GENUS. Previous work of Hass, Lagarias and Pippenger had shown that this problem is in PSPACE. No lower bounds on the running time were previously known. We show that this problem is NP-complete.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131938074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A set S in the vector space F_p^n is "good" if it satisfies any of the following (almost) equivalent conditions: (1) the elements of S are the rows of a generating matrix for a linear code of good distance, (2) all (nontrivial) Fourier coefficients of S are bounded away from 1, and (3) the Cayley graph on F_p^n with generators S is a good expander. A good set S must have at least cn vectors (with c > 1). We study conditions under which S is the orbit of only a constant number of vectors, under the action of a finite group G on the n coordinates. Such succinctly described sets yield very symmetric codes, and can "amplify" small constant-degree Cayley expanders to exponentially larger ones. For the regular action (the coordinates are named by the elements of the group G), we develop representation-theoretic conditions on the group G which guarantee the existence (in fact, abundance) of such few expanding orbits. The condition is a (nearly tight) upper bound on the distribution of dimensions of the irreducible representations of G, and is the main technical contribution of this paper. We further show a class of groups for which this condition is implied by the expansion properties of the group G itself! By combining these, we can iterate the amplification process above, and give (near-constant degree) Cayley expanders which are built from Abelian components. For other natural actions, such as that of the affine group on a finite field, we give the first explicit construction of such few expanding orbits.
{"title":"Expanders from symmetric codes","authors":"R. Meshulam, A. Wigderson","doi":"10.1145/509907.510004","DOIUrl":"https://doi.org/10.1145/509907.510004","url":null,"abstract":"A set S in the vector space F/sub p//sup n/ is \"good\" if it satisfies any of the following (almost) equivalent conditions: (1) S are the rows of a generating matrix for a linear distance code, (2) all (nontrivial) Fourier coefficients of S are bounded away from 1, and (3) the Cayley graph on F/sub p//sup n/ with generators S is a good expander A good set S must have at least cn vectors (with c > 1). We study conditions under which S is the orbit of only a constant number of vectors, under the action of a finite group G on the n coordinates. Such succinctly described sets yield very symmetric codes, and can \"amplify\" small constant-degree Cayley expanders to exponentially larger ones. For the regular action (the coordinates are named by the elements of the group G), we develop representative theoretic conditions on the group G which guarantee the existence (in fact, abundance) of such few expanding orbits. The condition is a (nearly tight) upper bound on the distribution of dimensions of the irreducible representations of G, and is the main technical contribution of this paper We further show a class of groups for which this condition is implied by the expansion properties of the group G itself! By combining these, we can iterate the amplification process above, and give (near-constant degree) Cayley expanders which are built from Abelian components. For other natural actions, such as of the affine group on a finite field, we give the first explicit construction of such few expanding orbits.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116843906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main concrete result of this paper is the first explicit construction of constant-degree lossless expanders. In these graphs, the expansion factor is almost as large as possible: (1 - ε)D, where D is the degree and ε is an arbitrarily small constant. The best previous explicit constructions gave expansion factor D/2, which is too weak for many applications. The D/2 bound was obtained via the eigenvalue method, and it is known that that method cannot give better bounds. The main abstract contribution of this paper is the introduction and initial study of randomness conductors, a notion which generalizes extractors, expanders, condensers and other similar objects. In all these functions, a certain guarantee on the input "entropy" is converted to a guarantee on the output "entropy". For historical reasons, specific objects used specific guarantees of different flavors. We show that the flexibility afforded by the conductor definition leads to interesting combinations of these objects, and to better constructions such as those above. The main technical tool in these constructions is a natural generalization to conductors of the zig-zag graph product, previously defined for expanders and extractors.
{"title":"Randomness conductors and constant-degree lossless expanders","authors":"Michael R. Capalbo, Omer Reingold, S. Vadhan, A. Wigderson","doi":"10.1145/509907.510003","DOIUrl":"https://doi.org/10.1145/509907.510003","url":null,"abstract":"The main concrete result of this paper is the first explicit construction of constant degree lossless expanders. In these graphs, the expansion factor is almost as large as possible: (1—ε)D, where D is the degree and ε is an arbitrarily small constant. The best previous explicit constructions gave expansion factor D/2, which is too weak for many applications. The D/2 bound was obtained via the eigenvalue method, and is known that that method cannot give better bounds.The main abstract contribution of this paper is the introduction and initial study of randomness conductors, a notion which generalizes extractors, expanders, condensers and other similar objects. In all these functions, certain guarantee on the input \"entropy\" is converted to a guarantee on the output \"entropy\". For historical reasons, specific objects used specific guarantees of different flavors. We show that the flexibility afforded by the conductor definition leads to interesting combinations of these objects, and to better constructions such as those above.The main technical tool in these constructions is a natural generalization to conductors of the zig-zag graph product, previously defined for expanders and extractors.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114745514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Shannon entropy is a measure of the randomness of a distribution, and plays a central role in statistics, information theory, and data compression. Knowing the entropy of a random source can shed light on the compressibility of data produced by such a source. We consider the complexity of approximating the entropy under various assumptions on the way the input is presented.
{"title":"The complexity of approximating the entropy","authors":"Tugkan Batu, S. Dasgupta, Ravi Kumar, R. Rubinfeld","doi":"10.1145/509907.510005","DOIUrl":"https://doi.org/10.1145/509907.510005","url":null,"abstract":"The Shannon entropy is a measure of the randomness of a distribution, and plays a central role in statistics, information theory, and data compression. Knowing the entropy of a random source can shed light on the compressibility of data produced by such a source. We consider the complexity of approximating the entropy under various different assumptions on the way the input is presented.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128035038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove exponential lower bounds on the size of a bounded depth Frege proof of a Tseitin graph-based contradiction, whenever the underlying graph is an expander. This is the first example of a contradiction, naturally formalized as a 3-CNF, that has no short bounded depth Frege proofs. Previously, lower bounds of this type were known only for the pigeonhole principle [18, 17], and for Tseitin contradictions based on complete graphs [19]. Our proof is a novel reduction of a Tseitin formula of an expander graph to the pigeonhole principle, in a manner resembling that done by Fu and Urquhart [19] for complete graphs. In the proof we introduce a general method for removing extension variables without significantly increasing the proof size, which may be interesting in its own right.
{"title":"Hard examples for bounded depth frege","authors":"Eli Ben-Sasson","doi":"10.1145/509907.509988","DOIUrl":"https://doi.org/10.1145/509907.509988","url":null,"abstract":"We prove exponential lower bounds on the size of a bounded depth Frege proof of a Tseitin graph-based contradiction, whenever the underlying graph is an expander. This is the first example of a contradiction, naturally formalized as a 3-CNF, that has no short bounded depth Frege proofs. Previously, lower bounds of this type were known only for the pigeonhole principle [18, 17], and for Tseitin contradictions based on complete graphs [19].Our proof is a novel reduction of a Tseitin formula of an expander graph to the pigeonhole principle, in a manner resembling that done by Fu and Urquhart [19] for complete graphs.In the proof we introduce a general method for removing extension variables without significantly increasing the proof size, which may be interesting in its own right.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132974568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shared entanglement is a resource available to parties communicating over a quantum channel, much akin to public coins in classical communication protocols: the two parties may be given some number of quantum bits jointly prepared in a fixed superposition, prior to communicating with each other. The quantum channel is then said to be "entanglement-assisted." Shared randomness does not help in the transmission of information from one party to another. Moreover, it does not significantly reduce the classical complexity of computing functions vis-a-vis private-coin protocols. On the other hand, prior entanglement leads to startling phenomena such as "quantum teleportation" and "superdense coding." The problem of characterising the power of prior entanglement has baffled many researchers, especially in the setting of bounded-error protocols. It is open whether it leads to more than a factor of two savings (using superdense coding) or more than an additive O(log) savings (when used to create shared randomness). Few lower bounds are known for communication problems in this setting, and they are all derived using sophisticated information-theoretic techniques. In this paper, we focus on the most basic problem in the setting of communication over an entanglement-assisted quantum channel: that of communicating classical bits from one party to another. We derive optimal bounds on the number of quantum bits required for this task, for any given probability of error.
{"title":"On communication over an entanglement-assisted quantum channel","authors":"A. Nayak, J. Salzman","doi":"10.1145/509907.510007","DOIUrl":"https://doi.org/10.1145/509907.510007","url":null,"abstract":"Shared entanglement is a resource available to parties communicating over a quantum channel, much akin to public coins in classical communication protocols: the two parties may be given some number of quantum bits jointly prepared in a fixed superposition, prior to communicating with each other. The quantum channel is then said to be \"entanglement-assisted.\" Shared randomness does not help in the transmission of information from one party to another. Moreover, it does not significantly reduce the classical complexity of computing functions vis-a-vis private-coin protocols. On the other hand, prior entanglement leads to startling phenomena such as \"quantum teleportation\" and \"superdense coding.\" The problem of characterising the power of prior entanglement has baffled many researchers, especially in the setting of bounded-error protocols. It is open whether it leads to more than a factor of two savings (using superdense coding) or more than an additive O(log) savings (when used to create shared randomness). Few lower bounds are known for communication problems in this setting, and are all derived using sophisticated information theoretic techniques. In this paper, we focus on the most basic problem in the setting of communication over an entanglement-assisted quantum channel, that of communicating classical bits from one party to another. We derive optimal bounds on the number of quantum bits required for this task, for any given probability of error.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133158947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The first non-trivial time-space tradeoff lower bounds have been shown for decision problems in P using notions derived from the study of two-party communication complexity. These results are proven directly for branching programs, natural generalizations of decision trees to directed graphs that provide elegant models of both non-uniform time T and space S simultaneously. We develop a new lower bound criterion, based on extending two-party communication complexity ideas to multiparty communication complexity. Applying this criterion to an explicit Boolean function based on a multilinear form over F_2, for suitable s, we show lower bounds that yield T = Ω(n log^2 n) when S ≤ n^(1-ε) log |D| for large input domain D. Finally, we develop lower bounds for nearest-neighbor problems involving n data points in a variety of d-dimensional metric spaces.
{"title":"Time-space tradeoffs, multiparty communication complexity, and nearest-neighbor problems","authors":"P. Beame, Erik Vee","doi":"10.1145/509907.510006","DOIUrl":"https://doi.org/10.1145/509907.510006","url":null,"abstract":"The first non-trivial time-space tradeoff lower bounds have been shown for decision problems in P using notions derived from the study of two-party communication complexity. These results are proven directly for branching programs, natural generalizations of decision trees to directed graphs that provide elegant models of both non-uniform time T and space S simultaneously. We develop a new lower bound criterion, based on extending two-party communication complexity ideas to multiparty communication complexity. Applying this criterion to an explicit Boolean function based on a multilinear form over F/sub 2/. for suitable s, we show lower bounds that yield T = /spl Omega/(n log/sup 2/ n) when S /spl les/ n/sup 1-/spl epsi// log |D| for large input domain D. Finally, we develop lower bounds for nearest-neighbor problems involving n data points in a variety of d-dimensional metric spaces.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125707358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We point out how the methods of Nisan [31, 32], originally developed for derandomizing space-bounded computations, may be applied to obtain polynomial-time and NC derandomizations of several probabilistic algorithms. Our list includes the randomized rounding steps of linear and semi-definite programming relaxations of optimization problems, parallel derandomization of discrepancy-type problems, and the Johnson-Lindenstrauss lemma, to name a few. A fascinating aspect of this style of derandomization is the fact that we often carry out the derandomizations directly from the statements about the correctness of probabilistic algorithms, rather than carefully mimicking their proofs.
{"title":"Algorithmic derandomization via complexity theory","authors":"D. Sivakumar","doi":"10.1145/509907.509996","DOIUrl":"https://doi.org/10.1145/509907.509996","url":null,"abstract":"We point out how the methods of Nisan [31, 32], originally developed for derandomizing space-bounded computations, may be applied to obtain polynomial-time and NC derandomizations of several probabilistic algorithms. Our list includes the randomized rounding steps of linear and semi-definite programming relaxations of optimization problems, parallel derandomization of discrepancy-type problems, and the Johnson--Lindenstrauss lemma, to name a few.A fascinating aspect of this style of derandomization is the fact that we often carry out the derandomizations directly from the statements about the correctness of probabilistic algorithms, rather than carefully mimicking their proofs.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125757746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We define a new family of collision resistant hash functions whose security is based on the worst-case hardness of approximating the covering radius of a lattice within a factor O(π n^2 log n), where π is a value between 1 and √n that depends on the solution of the closest vector problem in certain "almost perfect" lattices. Even for π = √n, this improves the smallest (worst-case) inapproximability factor for lattice problems known to imply the existence of one-way functions. (The best previously known factor was O(n^(3+ε)) for the shortest independent vector problem, due to Cai and Nerurkar, based on work of Ajtai.) Using standard transference theorems from the geometry of numbers, our result immediately gives a connection between the worst-case and average-case complexity of the shortest vector problem with connection factor O(π n^3 log n), improving the best previously known connection factor O(n^(4+ε)), also due to Ajtai, Cai and Nerurkar.
{"title":"Improved cryptographic hash functions with worst-case/average-case connection","authors":"Daniele Micciancio","doi":"10.1145/509907.509995","DOIUrl":"https://doi.org/10.1145/509907.509995","url":null,"abstract":"(MATH) We define a new family of collision resistant hash functions whose security is based on the worst case hardness of approximating the covering radius of a lattice within a factor <i>O</i>(π<i>n</i><sup>2</sup>log <i>n</i>), where π is a value between <i>1</i> and √ over <i>n</i> that depends on the solution of the closest vector problem in certain \"almost perfect\" lattices. Even for π = √ over <i>n</i>, this improves the smallest (worst-case) inapproximability factor for lattice problems known to imply the existence of one-way functions. (Previously known best factor was <i>O</i>(<i>n</i><sup>3+ε</sup>) for the shortest independent vector problem, due to Cai and Nerurkar, based on work of Ajtai.) Using standard transference theorems from the geometry of numbers, our result immediately gives a connection between the worst-case and average-case complexity of the shortest vector problem with connection factor <i>O</i>(π<i>n</i><sup>3</sup>}log <i>n</i>), improving the best previously known connection factor <i>O</i>(<i>n</i><sup>4+ε</sup>), also due to Ajtai, Cai and Nerurkar.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122542404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}