Several new complexity classes of search problems that lie between the classes FP and FNP are defined. These classes are contained in the class TFNP of search problems that always have a solution. A problem in each of these new classes is defined in terms of an implicitly given, exponentially large graph, very much like PLS (polynomial local search). The existence of the solution sought is established by means of a simple graph-theoretic lemma with an inefficiently constructive proof. Several class containments and collapses are shown, resulting in two new classes, PDLF contained in PLF; the relation of either class to PLS is open. PLF contains several important problems for which no polynomial-time algorithm is presently known.
{"title":"On graph-theoretic lemmata and complexity classes","authors":"C. Papadimitriou","doi":"10.1109/FSCS.1990.89602","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89602","url":null,"abstract":"Several new complexity classes of search problems that lie between the classes FP and FNP are defined. These classes are contained in the class TFNP of search problems that always have a solution. A problem in each of these new classes is defined in terms of an implicitly given, exponentially large graph, very much like PLS (polynomial local search). The existence of the solution sought is established by means of a simple graph-theoretic lemma with an inefficiently constructive proof. Several class containments and collapses, resulting in the two new classes PDLF contained in PLF are shown; the relation of either class of PLS is open. PLF contains several important problems for which no polynomial-time algorithm is presently known.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"23 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120810776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem of perfectly secure communication in a general network in which processors and communication lines may be faulty is studied. Lower bounds are obtained on the connectivity required for successful secure communication. Efficient algorithms that operate with this connectivity and rely on no complexity-theoretic assumptions are derived. These are the first algorithms for secure communication in a general network to achieve simultaneously the goals of perfect secrecy, perfect resiliency, and worst-case time linear in the diameter of the network.
{"title":"Perfectly secure message transmission","authors":"D. Dolev, C. Dwork, Orli Waarts, M. Yung","doi":"10.1145/138027.138036","DOIUrl":"https://doi.org/10.1145/138027.138036","url":null,"abstract":"The problem of perfectly secure communication in a general network in which processors and communication lines may be faulty is studied. Lower bounds are obtained on the connectivity required for successful secure communication. Efficient algorithms that operate with this connectivity and rely on no complexity theoretic assumptions are derived. These are the first algorithms for secure communication in a general network to achieve simultaneously the goals of perfect secrecy, perfect resiliency, and a worst case time which is linear in the diameter of the network.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"546 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131865288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let K be any field, and let F: K^n -> K^n be a bijection with the property that both F and F^-1 are computable using only arithmetic operations from K. Motivated by cryptographic considerations, the authors concern themselves with the relationship between the arithmetic complexity of F and the arithmetic complexity of F^-1. They give strong relations between the complexity of F and F^-1 when F is an automorphism in the sense of algebraic geometry (i.e., a formal bijection defined by n polynomials in n variables with a formal inverse of the same form). These constitute all such bijections in the case in which K is infinite. The authors show that at polynomially bounded degree, if an automorphism F has a polynomial-size arithmetic circuit, then F^-1 has a polynomial-size arithmetic circuit. Furthermore, this result is uniform in the sense that there is an efficient algorithm for finding such a circuit for F^-1, given such a circuit for F. This algorithm can also be used to check whether a circuit defines an automorphism F. If K is the Boolean field GF(2), then a circuit defining a bijection does not necessarily define an automorphism. However, it is shown in this case that, given any K^n -> K^n bijection, there always exists an automorphism defining that bijection. This is not generally true for an arbitrary finite field.
{"title":"Efficiently inverting bijections given by straight line programs","authors":"Carl Sturtivant, Zhi-Li Zhang","doi":"10.1109/FSCS.1990.89551","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89551","url":null,"abstract":"Let K be any field, and let F: K/sup n/ to K/sup n/ be a bijection with the property that both F and F/sup -1/ are computable using only arithmetic operations from K. Motivated by cryptographic considerations, the authors concern themselves with the relationship between the arithmetic complexity of F and the arithmetic complexity of F/sup -1/. They give strong relations between the complexity of F and F/sup -1/ when F is an automorphism in the sense of algebraic geometry (i.e. a formal bijection defined by n polynomials in n variables with a formal inverse of the same form). These constitute all such bijections in the case in which K is infinite. The authors show that at polynomially bounded degree, if an automorphism F has a polynomial-size arithmetic circuit, then F/sup -1/ has a polynomial-size arithmetic circuit. Furthermore, this result is uniform in the sense that there is an efficient algorithm for finding such a circuit for F/sup -1/, given such a circuit for F. This algorithm can also be used to check whether a circuit defines an automorphism F. If K is the Boolean field GF(2), then a circuit defining a bijection does not necessarily define an automorphism. However, it is shown in this case that, given any K/sup n/ to K/sup n/ bijection, there always exists an automorphism defining that bijection. This is not generally true for an arbitrary finite field.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127899710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The first algebraic average-case complete problem is presented. The focus of attention is the modular group, i.e., the multiplicative group SL_2(Z) of two-by-two integer matrices of determinant 1. By default, in this study matrices are elements of the modular group. The problem is arguably the simplest natural average-case complete problem to date.
{"title":"Matrix decomposition problem is complete for the average case","authors":"Y. Gurevich","doi":"10.1109/FSCS.1990.89603","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89603","url":null,"abstract":"The first algebraic average-case complete problem is presented. The focus of attention is the modular group, i.e., the multiplicative group SL/sub 2/(Z) of two-by-two integer matrices of determinant 1. By default, in this study matrices are elements of the modular group. The problem is arguably the simplest natural average-case complete problem to date.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128837193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The maximal number of character comparisons made by a linear-time string matching algorithm, given a text string of length n and a pattern string of length m over a general alphabet, is investigated. The number is denoted by c(n,m) or approximated by (1+C)n, where C is a universal constant. The subscript 'online' is added when attention is restricted to online algorithms, and the superscript '1' is added when algorithms that find only one occurrence of the pattern in the text are considered. It is well known that n <= c(n,m) <= 2n-m+1 (equivalently, 0 <= C <= 1). These bounds are improved, and C_online is determined exactly.
{"title":"On the exact complexity of string matching","authors":"L. Colussi, Z. Galil, R. Giancarlo","doi":"10.1109/FSCS.1990.89532","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89532","url":null,"abstract":"The maximal number of character comparisons made by a linear-time string matching algorithm, given a text string of length n and a pattern string of length m over a general alphabet, is investigated. The number is denoted by c(n,m) or approximated by (1+C)n, where C is a universal constant. The subscript 'online' is added when attention is restricted to online algorithms, and the superscript '1' is added when algorithms that find only one occurrence of the pattern in the text are considered. It is well known that n<or=c(n,m)<or=2n-m+1 or 0<or=C<or=1. These bounds are improved, and C/sub online/ is determined exactly.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116181962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several versions of the problem of generating triangular meshes for finite-element methods are studied. It is shown how to triangulate a planar point set or a polygonally bounded domain with triangles of bounded aspect ratio, how to triangulate a planar point set with triangles having no obtuse angles, how to triangulate a point set in arbitrary dimension with simplices of bounded aspect ratio, and how to produce a linear-size Delaunay triangulation of a multidimensional point set by adding a linear number of extra points. All the triangulations have size within a constant factor of optimal, and the algorithms run in optimal time O(n log n + k) with input of size n and output of size k. No previous work on mesh generation simultaneously guarantees well-shaped elements and small total size.
{"title":"Provably good mesh generation","authors":"M. Bern, D. Eppstein, J. Gilbert","doi":"10.1109/FSCS.1990.89542","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89542","url":null,"abstract":"Several versions of the problem of generating triangular meshes for finite-element methods are studied. It is shown how to triangulate a planar point set or a polygonally bounded domain with triangles of bounded aspect ratio, how to triangulate a planar point set with triangles having no obtuse angles, how to triangulate a point set in arbitrary dimension with simplices of bounded aspect ratio, and how to produce a linear-size Delaunay triangulation of a multidimensional point set by adding a linear number of extra points. All the triangulations have size within a constant factor of optimal and run in optimal time O(n log n+k) with input of size n and output of size k. No previous work on mesh generation simultaneously guarantees well-shaped elements and small total size.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126794951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient parallel algorithm for the tree-decomposition problem for fixed width w is presented. The algorithm runs in time O(log^3 n) and uses O(n) processors on a concurrent-read, concurrent-write parallel random access machine (CRCW PRAM). This result can be used to construct efficient parallel algorithms for three important classes of problems: MS (monadic second-order) properties, linear EMS (extended monadic second-order) extremum problems, and enumeration problems for MS properties, for graphs of tree width at most w. The sequential time complexity of the tree-decomposition problem for fixed w is improved, and some implications of this improvement are stated.
{"title":"Efficient parallel algorithms for tree-decomposition and related problems","authors":"J. Lagergren","doi":"10.1109/FSCS.1990.89536","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89536","url":null,"abstract":"An efficient parallel algorithm for the tree-decomposition problem for fixed width w is presented. The algorithm runs in time O(log/sup 3/ n) and uses O(n) processors on a concurrent-read, concurrent-write parallel random access machine (CRCW PRAM). This result can be used to construct efficient parallel algorithms for three important classes of problems: MS (monadic second-order) properties, linear EMS (extended monadic second-order) extremum problems, and enumeration problems for MS properties, for graphs of tree width at most w. The sequential time complexity of the tree-composition problem for fixed w is improved, and some implications for this improvement are stated.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132026246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A general theory is developed for constructing the shallowest possible circuits and the shortest possible formulas for the carry-save addition of n numbers using any given basic addition unit. More precisely, it is shown that if BA is a basic addition unit with occurrence matrix N, then the shortest multiple carry-save addition formulas that could be obtained by composing BA units are of size n^(1/p+o(1)), where p is the unique real number for which the L_p norm of the matrix N equals 1. An analogous result connects the delay matrix M of the basic addition unit BA and the minimal q such that multiple carry-save addition circuits of depth (q+o(1)) log n could be constructed by combining BA units. On the basis of these optimal constructions of multiple carry-save adders, the shallowest known multiplication circuits are constructed.
{"title":"Faster circuits and shorter formulae for multiple addition, multiplication and symmetric Boolean functions","authors":"M. Paterson, N. Pippenger, Uri Zwick","doi":"10.1109/FSCS.1990.89586","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89586","url":null,"abstract":"A general theory is developed for constructing the shallowest possible circuits and the shortest possible formulas for the carry-save addition of n numbers using any given basic addition unit. More precisely, it is shown that if BA is a basic addition unit with occurrence matrix N, then the shortest multiple carry-save addition formulas that could be obtained by composing BA units are of size n/sup 1/p+o(1)/, where p is the unique real number for which the L/sub p/ norm of the matrix N equals 1. An analogous result connects the delay matrix M of the basic addition unit BA and the minimal q such that multiple carry-save addition circuits of depth (q+o(1)) log n could be constructed by combining BA units. On the basis of these optimal constructions of multiple carry-save adders, the shallowest known multiplication circuits are constructed.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134370702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The notion of pure nested radicals and its field-theoretic counterpart, pure root extensions, are defined and used for investigating exact radical solutions.
{"title":"Simplifying nested radicals and solving polynomials by radicals in minimum depth","authors":"G. Horng, Ming-Deh A. Huang","doi":"10.1109/FSCS.1990.89607","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89607","url":null,"abstract":"The notion of pure nested radicals and its field-theoretic counterpart, pure root extensions, are defined and used for investigating exact radical solutions.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132300508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
#P functions are characterized by certain straight-line programs of multivariate polynomials. The power of this characterization is illustrated by a number of consequences. These include a somewhat simplified proof of S. Toda's (1989) theorem that PH is contained in P^#P, as well as an infinite class of potentially inequivalent checkable functions.
{"title":"A characterization of Hash P by arithmetic straight line programs","authors":"L. Babai, L. Fortnow","doi":"10.1109/FSCS.1990.89521","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89521","url":null,"abstract":"Hash P functions are characterized by certain straight-line programs of multivariate polynomials. The power of this characterization is illustrated by a number of consequences. These include a somewhat simplified proof of S. Toda's (1989) theorem that PH contained in P/sup Hash P/, as well as an infinite class of potentially inequivalent checkable functions.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133805752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}