A general theory is developed for constructing the shallowest possible circuits and the shortest possible formulas for the carry-save addition of n numbers using any given basic addition unit. More precisely, it is shown that if BA is a basic addition unit with occurrence matrix N, then the shortest multiple carry-save addition formulas that could be obtained by composing BA units are of size n^(1/p+o(1)), where p is the unique real number for which the L_p norm of the matrix N equals 1. An analogous result connects the delay matrix M of the basic addition unit BA and the minimal q such that multiple carry-save addition circuits of depth (q+o(1)) log n could be constructed by combining BA units. On the basis of these optimal constructions of multiple carry-save adders, the shallowest known multiplication circuits are constructed.
{"title":"Faster circuits and shorter formulae for multiple addition, multiplication and symmetric Boolean functions","authors":"M. Paterson, N. Pippenger, Uri Zwick","doi":"10.1109/FSCS.1990.89586","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89586","url":null,"abstract":"A general theory is developed for constructing the shallowest possible circuits and the shortest possible formulas for the carry-save addition of n numbers using any given basic addition unit. More precisely, it is shown that if BA is a basic addition unit with occurrence matrix N, then the shortest multiple carry-save addition formulas that could be obtained by composing BA units are of size n/sup 1/p+o(1)/, where p is the unique real number for which the L/sub p/ norm of the matrix N equals 1. An analogous result connects the delay matrix M of the basic addition unit BA and the minimal q such that multiple carry-save addition circuits of depth (q+o(1)) log n could be constructed by combining BA units. On the basis of these optimal constructions of multiple carry-save adders, the shallowest known multiplication circuits are constructed.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134370702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors introduce a model, called the uniform memory hierarchy (UMH) model, which reflects the hierarchical nature of computer memory more accurately than the RAM (random-access-machine) model, which assumes that any item in memory can be accessed at unit cost. In the UMH model, memory occurs as a sequence of increasingly large levels. Data are transferred between levels in fixed-size blocks (the size is level dependent); within a level, blocks are randomly accessible. The model is easily extended to handle parallelism. The UMH model is really a family of models, parameterized by the rate at which bandwidth decays as one travels up the hierarchy. A program is parsimonious on a UMH if the leading terms of the program's (time) complexity on the UMH and on a RAM are identical; if these terms differ by more than a constant factor, the program is inefficient. The authors analyze two standard FFT programs with the same RAM complexity: one is efficient, the other is not.
{"title":"Uniform memory hierarchies","authors":"B. Alpern, L. Carter, E. Feig","doi":"10.1109/FSCS.1990.89581","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89581","url":null,"abstract":"The authors introduce a model, called the uniform memory hierarchy (UMH) model, which reflects the hierarchical nature of computer memory more accurately than the RAM (random-access-machine) model, which assumes that any item in memory can be accessed with unit cost. In the model memory occurs as a sequence of increasingly large levels. Data are transferred between levels in fixed-size blocks (the size is level dependent). Within a level blocks are random access. The model is easily extended to handle parallelism. The UMH model is really a family of models parameterized by the rate at which the bandwidth decays as one travels up the hierarchy. A program is parsimonious on a UMH if the leading terms of the program's (time) complexity on the UMH and on a RAM are identical. If these terms differ by more than a constant factor, then the program is inefficient. The authors analyze two standard FFT programs with the same RAM complexity. One is efficient; the other is not.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"438 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134390277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The first algebraic average-case complete problem is presented. The focus of attention is the modular group, i.e., the multiplicative group SL_2(Z) of two-by-two integer matrices of determinant 1. By default, in this study matrices are elements of the modular group. The problem is arguably the simplest natural average-case complete problem to date.
{"title":"Matrix decomposition problem is complete for the average case","authors":"Y. Gurevich","doi":"10.1109/FSCS.1990.89603","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89603","url":null,"abstract":"The first algebraic average-case complete problem is presented. The focus of attention is the modular group, i.e., the multiplicative group SL/sub 2/(Z) of two-by-two integer matrices of determinant 1. By default, in this study matrices are elements of the modular group. The problem is arguably the simplest natural average-case complete problem to date.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128837193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
#P functions are characterized by certain straight-line programs of multivariate polynomials. The power of this characterization is illustrated by a number of consequences. These include a somewhat simplified proof of S. Toda's (1989) theorem that PH is contained in P^(#P), as well as an infinite class of potentially inequivalent checkable functions.
{"title":"A characterization of Hash P by arithmetic straight line programs","authors":"L. Babai, L. Fortnow","doi":"10.1109/FSCS.1990.89521","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89521","url":null,"abstract":"Hash P functions are characterized by certain straight-line programs of multivariate polynomials. The power of this characterization is illustrated by a number of consequences. These include a somewhat simplified proof of S. Toda's (1989) theorem that PH contained in P/sup Hash P/, as well as an infinite class of potentially inequivalent checkable functions.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133805752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The notion of pure nested radicals and its field-theoretic counterpart, pure root extensions, are defined and used for investigating exact radical solutions.
{"title":"Simplifying nested radicals and solving polynomials by radicals in minimum depth","authors":"G. Horng, Ming-Deh A. Huang","doi":"10.1109/FSCS.1990.89607","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89607","url":null,"abstract":"The notion of pure nested radicals and its field-theoretic counterpart, pure root extensions, are defined and used for investigating exact radical solutions.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132300508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model of learning that expands on the Valiant model is introduced. The point of departure from the Valiant model is that the learner is placed in a Markovian environment. The environment of the learner is an (exponentially large) graph, and the examples reside on the vertices of the graph, one example on each vertex. The learner obtains the examples while performing a random walk on the graph. At each step, the learning algorithm guesses the classification of the example on the current vertex using its current hypothesis. If its guess is incorrect, the learning algorithm updates its current working hypothesis. The performance of the learning algorithm in a given environment is judged by the expected number of mistakes made as a function of the number of steps in the random walk. The predictive value of Occam algorithms under this weaker probabilistic model of the learner's environment is studied.
{"title":"A Markovian extension of Valiant's learning model","authors":"D. Aldous, U. Vazirani","doi":"10.1109/FSCS.1990.89558","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89558","url":null,"abstract":"A model of learning that expands on the Valiant model is introduced. The point of departure from the Valiant model is that the learner is placed in a Markovian environment. The environment of the learner is a (exponentially large) graph, and the examples reside on the vertices of the graph, one example on each vertex. The learner obtains the examples while performing a random walk on the graph. At each step, the learning algorithm guesses the classification of the example on the current vertex using its current hypothesis. If its guess is incorrect, the learning algorithm updates its current working hypothesis. The performance of the learning algorithm in a given environment is judged by the expected number of mistakes made as a function of the number of steps in the random walk. The predictive value of Occam algorithms under this weaker probabilistic model of the learner's environment is studied.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130950507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The time complexity of wait-free algorithms in so-called normal executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Omega(log n) time separation between the wait-free and non-wait-free computation models. An O(log n)-time wait-free approximate agreement algorithm is also presented; its complexity is within a small constant factor of the lower bound.
{"title":"Are wait-free algorithms fast?","authors":"H. Attiya, N. Lynch, N. Shavit","doi":"10.1109/FSCS.1990.89524","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89524","url":null,"abstract":"The time complexity of wait-free algorithms in so-called normal executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Omega (log n)-time separation between the wait-free and non-wait-free computation models. An O(log n)-time wait-free approximate agreement algorithm is presented. Its complexity is within a small constant of the lower bound.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134078579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors solve the two major open problems associated with noninteractive zero-knowledge proofs: how to enable polynomially many provers to prove in writing polynomially many theorems on the basis of a single random string, and how to construct such proofs under general (rather than number-theoretic) assumptions. The constructions can be used in cryptographic applications in which the prover is restricted to polynomial time, and they are much simpler than earlier (and less capable) proposals.
{"title":"Multiple non-interactive zero knowledge proofs based on a single random string","authors":"U. Feige, D. Lapidot, A. Shamir","doi":"10.1109/FSCS.1990.89549","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89549","url":null,"abstract":"The authors solve the two major open problems associated with noninteractive zero-knowledge proofs: how to enable polynomially many provers to prove in writing polynomially many theorems based on the basis of a single random string, and how to construct such proofs under general (rather than number-theoretic) assumptions. The constructions can be used in cryptographic applications in which the prover is restricted to polynomial time, and they are much simpler than earlier (and less capable) proposals.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114340186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is O(log log n + k + log(1/epsilon)), where epsilon is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by J. Naor and M. Naor (1990). An advantage of the present constructions is their simplicity. Two of the constructions are based on bit sequences that are widely believed to possess randomness properties, and the results can be viewed as an explanation and establishment of these beliefs.
{"title":"Simple construction of almost k-wise independent random variables","authors":"N. Alon, Oded Goldreich, J. Håstad, R. Peralta","doi":"10.1109/FSCS.1990.89575","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89575","url":null,"abstract":"The authors present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is O(log log n+k+log 1/ epsilon ), where epsilon is the statistical difference between the distribution induced on any k-bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by J. Naor and M. Naor (1990). An advantage of the present constructions is their simplicity. Two of the constructions are based on bit sequences that are widely believed to possess randomness properties, and the results can be viewed as an explanation and establishment of these beliefs.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114448743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model of computation dealing with infinite alphabets is proposed. The model is based on replacing the equality test by unification. It appears to be a natural generalization of the classical Rabin-Scott finite-state automata and possesses many of their properties.
{"title":"Finite-memory automata","authors":"M. Kaminski, N. Francez","doi":"10.1109/FSCS.1990.89590","DOIUrl":"https://doi.org/10.1109/FSCS.1990.89590","url":null,"abstract":"A model of computation dealing with infinite alphabets is proposed. The model is based on replacing the equality test by unification. It appears to be a natural generalization of the classical Rabin-Scott finite-state automata and possesses many of their properties.<<ETX>>","PeriodicalId":271949,"journal":{"name":"Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114796345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}