"Computing integrated costs of sequences of operations with application to dictionaries"
P. Flajolet, J. Françon, J. Vuillemin. STOC 1979. doi:10.1145/800135.804397

We introduce a notion of the integrated cost of a dictionary, defined as the average cost of a sequence of search, insert, and delete operations. We express the generating functions of these sequences in terms of continued fractions; from this we derive an explicit integral expression for the integrated cost of three common representations of dictionaries.
"Negation can be exponentially powerful"
L. Valiant. STOC 1979. doi:10.1145/800135.804412

Among the most remarkable algorithms in algebra are Strassen's algorithm for the multiplication of matrices and the Fast Fourier Transform method for the convolution of vectors. For both of these problems the definition suggests an obvious algorithm that uses just the monotone operations + and ×. Schnorr [18] has shown that these algorithms, which use Θ(n^3) and Θ(n^2) operations respectively, are essentially optimal among algorithms that use only these monotone operations. By using subtraction as an additional operation and exploiting cancellations of computed terms in a very intricate way, Strassen showed that a faster algorithm requiring only O(n^2.81) operations is possible. The FFT method for convolution achieves O(n log n) complexity in a similar fashion. The question arises as to whether we can expect even greater gains in computational efficiency from such judicious use of cancellations. In this paper we give a positive answer by exhibiting a problem for which an exponential speedup can be attained using {+,−,×} rather than just {+,×} as operations. The problem in question is the multivariate polynomial associated with perfect matchings in planar graphs. For this a fast algorithm is implicit in the Pfaffian technique of Fisher and Kasteleyn [6,8]. The main result we provide here is the exponential lower bound in the monotone case.
"Finding patterns common to a set of strings (Extended Abstract)"
D. Angluin. STOC 1979. doi:10.1145/800135.804406

We motivate, formalize, and study a computational problem in concrete inductive inference. A “pattern” is defined to be a concatenation of constants and variables, and the language of a pattern is the set of strings obtained by substituting constant strings for the variables. The problem we consider is: given a set of strings, find a minimal pattern language containing this set. This problem is shown to be effectively solvable in the general case and to lead to correct inference in the limit of the pattern languages. There is a polynomial-time algorithm for it in the restricted case of one-variable patterns. Inference from positive data is re-examined, and a characterization is given of when it is possible for a family of recursive languages. Various collateral results about patterns and pattern languages are obtained. Section 1 is an introduction explaining the context of this work and informally describing the problem formulation. Section 2 gives definitions. Section 3 presents results concerning patterns and pattern languages. Section 4 concerns the abstract question of inference from positive data. Section 5 gives a polynomial-time algorithm for finding minimal one-variable pattern languages compatible with a given set of strings. Section 6 contains concluding remarks.
"The recognition of Series Parallel digraphs"
J. Valdes, R. Tarjan, E. Lawler. STOC 1979. doi:10.1145/800135.804393

We present an algorithm that recognizes the class of General Series Parallel digraphs and runs in time proportional to the size of its input. To perform this recognition task it is necessary to compute the transitive reduction and transitive closure of any General Series Parallel digraph. Our analysis is based on the relationship between General Series Parallel digraphs and a class of well-known models of electrical networks.
"On γ-reducibility versus polynomial time many-one reducibility (Extended Abstract)"
T. Long. STOC 1979. doi:10.1145/800135.804421

We prove that a class of functions (denoted by NPCPt), whose graphs can be accepted in non-deterministic polynomial time, can be evaluated in deterministic polynomial time if and only if γ-reducibility is equivalent to polynomial time many-one reducibility. We also modify the proof technique used to obtain part of this result to obtain the stronger result that if every γ-reduction can be replaced by a polynomial time Turing reduction, then every function in NPCPt can be evaluated in deterministic polynomial time.
"Lower bounds on the size of sweeping automata"
M. Sipser. STOC 1979. doi:10.1145/800135.804429

Establishing good lower bounds on the complexity of languages is an important area of current research in the theory of computation. However, despite much effort, fundamental questions such as whether P = NP and whether L = NL remain open. To resolve these questions it may be necessary to develop a deep combinatorial understanding of polynomial time or log space computations, possibly a formidable task. One avenue for approaching these problems is to study weaker models of computation for which the analogous problems may be easier to settle, perhaps yielding insight into the original problems. Sakoda and Sipser [3] raise the following question about finite automata: is there a polynomial p such that every n-state 2nfa (two-way nondeterministic finite automaton) has an equivalent p(n)-state 2dfa? They conjecture a negative answer. In this paper we take a step toward proving this conjecture by showing that 2nfa are exponentially more succinct than 2dfa of a certain restricted form.
"Deadlock-free packet switching networks"
S. Toueg, J. Ullman. STOC 1979. doi:10.1145/800135.804402

Deadlock is one of the most serious system failures that can occur in a computer system or network. Deadlock states have been observed in existing computer networks, emphasizing the need for carefully designed flow-control procedures (controllers) to avoid deadlocks. Such a deadlock-free controller is readily found if we allow it global information about the overall network state. Generally, this assumption is not realistic, and we must resort to deadlock-free local controllers that use only packet and node information. We present several types of such controllers, study their relationships, and prove their optimality with respect to deadlock-free controllers that use the same set of local parameters.
"Completeness classes in algebra"
L. Valiant. STOC 1979. doi:10.1145/800135.804419

In the theory of recursive functions and computational complexity it has been demonstrated repeatedly that the natural problems tend to cluster together in “completeness classes”. These are families of problems that (A) are computationally interreducible and (B) are the hardest members of some computationally defined class. The aim of this paper is to demonstrate that for both algebraic and combinatorial problems this phenomenon exists in a form that is purely algebraic in both of the respects (A) and (B). Such computational consequences as NP-completeness are particular manifestations of something more fundamental. The core of the paper is self-contained, consisting as it does essentially of the two notions of “p-definability” and the five algebraic relations that are proved as theorems. In the remainder our aim is to elucidate the computational consequences of these basic results. Hence in the auxiliary propositions and discussion we do, for convenience, assume familiarity with algebraic and Boolean complexity theory.
"A completeness technique for d-axiomatizable semantics"
F. Berman. STOC 1979. doi:10.1145/800135.804409

In this paper, we show that by dropping the restrictions on interpretations of arbitrary programs and requiring only that very natural deductive systems be sound, we get classes of semantics that give good representations of program behavior and are better suited to applications involving an axiomatic approach (for example, program verification). In addition, by tying the restrictions on the behavior of arbitrary programs to specified axiom schemata, we get both a powerful formal tool and properties, such as compactness and completeness, that more widely used specifications lack. Completeness is a very desirable property. It is fairly straightforward to show, given any reasonable deductive system D for a class of models A, that Pr(D) ⊆ Th(A). But given an application such as program verification, if Th(A) ⊆ Pr(D) does not hold, we may be able to find correct programs that we cannot verify. In this paper we show that by using the “axiomatizability” of programming constructs, we can obtain a technique for showing completeness results for some of the more widely used variations of PDL. We begin with some definitions.
"Equivalence of relational database schemes"
C. Beeri, A. Mendelzon, Y. Sagiv, J. Ullman. STOC 1979. doi:10.1145/800135.804424

We investigate the question of when two database schemes embody the same information. We argue that this question reduces to the equivalence of the sets of fixed points of the project-join mappings associated with the two database schemes in question. When data dependencies are given, we need only consider those fixed points that satisfy the dependencies. A polynomial algorithm to test the equivalence of database schemes, when there are no dependencies, is given. We also provide an exponential algorithm to handle the case where there are functional and/or multivalued dependencies. Furthermore, we give a polynomial time test to determine whether a project-join mapping preserves a set of functional dependencies, and a polynomial time algorithm for equivalence of database schemes whose project-join mappings do preserve the given set of functional dependencies. Lastly, we introduce the “update sets” approach to database design as an application of these results.