We show that “looping” of while-programs can be expressed in Regular First-Order Dynamic Logic, disproving a conjecture made in [Harel-Pratt 1978]. In addition we show that the expressive power of quantifier-free Dynamic Logic increases when nondeterminism is introduced in the programs that are part of formulae of Dynamic Logic. Allowing assignments of random values to variables increases the expressive power even further.
{"title":"On the expressive power of Dynamic Logic (Preliminary Report)","authors":"A. Meyer, Karl Winklmann","doi":"10.1145/800135.804410","DOIUrl":"https://doi.org/10.1145/800135.804410","url":null,"abstract":"We show that “looping” of while-programs can be expressed in Regular First-Order Dynamic Logic, disproving a conjecture made in [Harel-Pratt 1978]. In addition we show that the expressive power of quantifier-free Dynamic Logic increases when nondeterminism is introduced in the programs that are part of formulae of Dynamic Logic. Allowing assignments of random values to variables increases the expressive power even further.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129307800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Numerous algorithms concerning relational databases use a cover for a set of functional dependencies as all or part of their input. Examples are Bernstein and Beeri's synthesis algorithm [BB] and the tableau modification algorithm of Aho, Beeri, and Ullman [ABU]. The performance of these algorithms may depend both on the number of functional dependencies in the cover and on the total size of the cover. Starting with a smaller cover will make such algorithms run faster. Following Bernstein [Be75], many researchers have believed that the problem of finding a minimum cover is NP-complete. We show that minimum covers can be found in polynomial time, using the notion of direct determination. The proof details the structure of minimum covers, refining the structure Bernstein and Beeri established for non-redundant covers [BB]. The kernel algorithm of Lewis, Sekino, and Ting [LST] is improved using these results.
{"title":"Minimum covers in the relational database model (Extended Abstract)","authors":"D. Maier","doi":"10.1145/800135.804425","DOIUrl":"https://doi.org/10.1145/800135.804425","url":null,"abstract":"Numerous algorithms concerning relational databases use a cover for a set of functional dependencies as all or part of their input. Examples are Bernstein and Beeri's synthesis algorithm [BB] and the tableau modification algorithm of Aho, Beeri, and Ullman [ABU]. The performance of these algorithms may depend both on the number of functional dependencies in the cover and the total size of the cover. Starting with a smaller cover will make such algorithms run faster. After Bernstein [Be75], many researchers believe the problem of finding a minimum cover is NP-complete. We show that minimum covers can be found in polynomial time, using the notion of direct determination. The proof details the structure of minimum covers, refining the structure Bernstein and Beeri show for non-redundant covers [BB]. The kernel algorithm of Lewis, Sekino, and Ting [LST] is improved using these results.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"275 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124442486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A common operation in geometric computing is the decomposition of complex structures into more basic structures. Since it is easier to apply most algorithms to triangles or arbitrary convex polygons, there is considerable interest in finding fast algorithms for such decompositions. We consider the problem of decomposing a simple (non-convex) polygon into the union of a minimal number of convex polygons. Although the structure of the problem led to the conjecture that it was NP-complete, we have been able to obtain polynomial-time-bounded algorithms for exact solution, as well as low-degree polynomial-time-bounded approximation methods.
{"title":"Decomposing a polygon into its convex parts","authors":"B. Chazelle, D. Dobkin","doi":"10.1145/800135.804396","DOIUrl":"https://doi.org/10.1145/800135.804396","url":null,"abstract":"A common operation in geometric computing is the decomposition of complex structures into more basic structures. Since it is easier to apply most algorithms to triangles or arbitrary convex polygons, there is considerable interest in finding fast algorithms for such decompositions. We consider the problem of decomposing a simple (non-convex) polygon into the union of a minimal number of convex polygons. Although the structure of the problem led to the conjecture that it was NP-complete, we have been able to reach polynomial time bounded algorithms for exact solution as well as low degree polynomial time bounded algorithm/or approximation methods.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121259538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A recent trend in cryptographic systems is to base their encryption/decryption functions on NP-complete problems, and in particular on the knapsack problem. To analyze the security of these systems, we need a complexity theory which is less worst-case oriented and which takes into account the extra conditions imposed on the problems to make them cryptographically useful. In this paper we consider the two classes of one-to-one and onto knapsack systems, analyze the complexity of recognizing them and of solving their instances, introduce a new complexity measure (median complexity), and show that this complexity is inversely proportional to the density of the knapsack system. The tradeoff result is based on a fast probabilistic knapsack solving algorithm which is applicable only to one-to-one systems, and it indicates that knapsack-based cryptographic systems in which one can both encrypt and sign messages are relatively insecure. We end the paper with new results about the security of some specific knapsack systems.
{"title":"On the cryptocomplexity of knapsack systems","authors":"A. Shamir","doi":"10.1145/800135.804405","DOIUrl":"https://doi.org/10.1145/800135.804405","url":null,"abstract":"A recent trend in cryptographic systems is to base their encryption/decryption functions on NP-complete problems, and in particular on the knapsack problem. To analyze the security of these systems, we need a complexity theory which is less worst-case oriented and which takes into account the extra conditions imposed on the problems to make them cryptographically useful. In this paper we consider the two classes of one-to-one and onto knapsack systems, analyze the complexity of recognizing them and of solving their instances, introduce a new complexity measure (median complexity), and show that this complexity is inversely proportional to the density of the knapsack system. The tradeoff result is based on a fast probabilistic knapsack solving algorithm which is applicable only to one-to-one systems, and it indicates that knapsack-based cryptographic systems in which one can both encrypt and sign messages are relatively insecure. We end the paper with new results about the security of some specific knapsack systems.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133506176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We examine a pebbling problem which has been used to study the storage requirements of various models of computation. Sethi has shown this problem to be NP-hard and Lingas has shown a generalization to be P-space complete. We prove the original problem P-space complete by employing a modification of Lingas's proof. The pebbling problem is one of the few examples of a P-space complete problem not exhibiting any obvious quantifier alternation.
{"title":"The pebbling problem is complete in polynomial space","authors":"J. Gilbert, Thomas Lengauer, R. Tarjan","doi":"10.1145/800135.804418","DOIUrl":"https://doi.org/10.1145/800135.804418","url":null,"abstract":"We examine a pebbling problem which has been used to study the storage requirements of various models of computation. Sethi has shown this problem to be NP-hard and Lingas has shown a generalization to be P-space complete. We prove the original problem P-space complete by employing a modification of Lingas's proof. The pebbling problem is one of the few examples of a P-space complete problem not exhibiting any obvious quantifier alternation.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132475808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem of searching for a given k-vector among a sorted list of n k-vectors is considered. The binary search is known to be optimal when k is 1. Here an almost optimal algorithm is presented for the 2-dimensional case. Interesting upper and lower bounds are derived for the general problem.
{"title":"On a multidimensional search problem (Preliminary Version)","authors":"S. Kosaraju","doi":"10.1145/800135.804399","DOIUrl":"https://doi.org/10.1145/800135.804399","url":null,"abstract":"The problem of searching for a given k-vector among a sorted list of n k-vectors is considered. The binary search is known to be optimal when k is 1. Here an almost optimal algorithm is presented for the 2-dimensional case. Interesting upper and lower bounds are derived for the general problem.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131816978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given a function f over a finite domain D and an arbitrary starting point x, the sequence x, f(x), f(f(x)), ... is ultimately periodic. Such sequences are typically used for constructing random number generators. The cycle problem is to determine the first repeated element f^n(x) in the sequence. Previous algorithms for this problem have required 3n operations. In this paper we present an algorithm which requires only n(1 + O(1/√M)) steps, if M memory cells are available to store values of the function. By increasing M, this running time can be made arbitrarily close to the information-theoretic lower bound on the running time of any algorithm for the cycle problem. Our treatment is novel in that we explicitly consider the performance of the algorithm as a function of the amount of memory available, as well as the relative cost of evaluating f and comparing sequence elements for equality.
{"title":"The complexity of finding periods","authors":"R. Sedgewick, T. G. Szymanski","doi":"10.1145/800135.804400","DOIUrl":"https://doi.org/10.1145/800135.804400","url":null,"abstract":"Given a function f over a finite domain D and an arbitrary starting point x, the sequence x,f(x),f(f(x)),... is ultimately periodic. Such sequences typically are used for constructing random number generators. The cycle problem is to determine the first repeated element fn(x) in the sequence. Previous algorithms for this problem have required 3n operations. In this paper we present an algorithm which only requires n(1+O(1/(@@@@)M)) steps, if M memory cells are available to store values of the function. By increasing M, this running time can be made arbitrarily close to the information-theoretic lower bound on the running time of any algorithm for the cycle problem. Our treatment is novel in that we explicitly consider the performance of the algorithm as a function of the amount of memory available as well as the relative cost of evaluating f and comparing sequence elements for equality.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114014308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider representations of data structures in which the relative ordering of the values stored is implicit in the pattern in which the elements are retained, rather than explicit in pointers. Several implicit schemes for storing data are introduced to permit efficient implementation of the instructions insert, delete and find. Θ(√N) basic operations are shown to be necessary and sufficient, in the worst case, to perform these instructions provided that the data elements are kept in some fixed partial order. We demonstrate, however, that further improvements can be made if an arrangement other than a fixed partial order is used. A structure, based on a fixed partial order, is introduced to facilitate multiple key searches. This structure, together with the retrieval scheme based upon it, is shown to be within a constant factor of the optimal one based on a partial order.
{"title":"Implicit data structures (Preliminary Draft)","authors":"J. Munro, Hendra Suwanda","doi":"10.1145/800135.804404","DOIUrl":"https://doi.org/10.1145/800135.804404","url":null,"abstract":"We consider representations of data structures in which the relative ordering of the values stored is implicit in the pattern in which the elements are retained, rather than explicit in pointers. Several implicit schemes for storing data are introduced to permit efficient implementation of the instructions insert, delete and find. &thgr;(@@@@N) basic operations are shown to be necessary and sufficient, in the worst case, to perform these instructions provided that the data elements are kept in some fixed partial order. We demonstrate, however, that further improvements can be made if an arrangement other than a fixed partial order is used. A structure, based on a fixed partial order, is introduced to facilitate multiple key searches. This structure, together with the retrieval scheme based upon it, is shown to be within a constant factor of the optimal one based on a partial order.","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123161161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let G denote the set of elements of a commutative group whose addition operation is denoted by +, let N be a positive integer, and let A(1), ..., A(N) denote an array with values in G. We will be concerned with designing data structures for representing the array A which facilitate efficient implementation of the following two on-line tasks: (1) Update(j,x): replace A(j) by A(j) + x (j and x are inputs, 1 ≤ j ≤ N and x ∈ G); (2) Retrieve(j): return the value of A(1) + ... + A(j) (j is an input, 1 ≤ j ≤ N). As a motivating example, let G be the group of integers with + denoting the usual addition operation. Imagine a standardized examination given to large numbers of individuals over an indefinite period of time. Assume that each examinee will attain an integer score in the interval [1,N]. If an individual gets j points, this fact is recorded by executing Update(j,1), so that A(j) represents the number of individuals to date having scored j points. In order to compute the percentile currently associated with a particular score k, we need the cumulative sum provided by executing Retrieve(k).
{"title":"A near optimal data structure for a type of range query problem","authors":"M. Fredman","doi":"10.1145/800135.804398","DOIUrl":"https://doi.org/10.1145/800135.804398","url":null,"abstract":"Let G denote the set of elements of a commutative group whose addition operations is denoted by +, let N be a positive integer, and let A(1) ,..., A(N) denote an array with values in G. We will be concerned with designing data structures for representing the array A, which facilitate efficient implementation of the following two on-line tasks: (1) Update(j,x); replace A(j) by A(j) +x. (j and x are inputs, 1≤j≤N and x&egr;G) (2) Retrieve(j); returns the value of A(1) +...+ A(j). (j is an input, 1≤j≤N) As a motivating example, let G be the group of integers with + denoting the usual addition operation. Imagine a standardized examination given to large numbers of individuals over an indefinite period of time. Assume that each examinee will attain an integer score in the interval [1,N]. If an individual gets j points, this fact is recorded by executing Update(j,1). so that A(j) represents the number of individuals to date having scored j points. In order to compute the percentile currently associated with a particular score k, we need the cumulative sum provided by executing Retrieve(k).","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116962854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An O(EV log^2 V) algorithm for finding the maximal flow in networks is described. It is asymptotically better than the other known algorithms if E = O(V^(2-ε)) for some ε > 0. The analysis of the running time exploits the discovery of a phenomenon similar to (but more general than) path compression, although the union-find algorithm is not used. The time bound is shown to be tight in terms of V and E by exhibiting a family of networks that require Ω(EV log^2 V) time.
{"title":"Network flow and generalized path compression","authors":"Z. Galil, A. Naamad","doi":"10.1145/800135.804394","DOIUrl":"https://doi.org/10.1145/800135.804394","url":null,"abstract":"An O(EVlog2V) algorithm for finding the maximal flow in networks is described. It is asymptotically better than the other known algorithms if E = O(V2-ε) for some ε>0. The analysis of the running time exploits the discovery of a phenomenon similar to (but more general than) path compression, although the union find algorithm is not used. The time bound is shown to be tight in terms of V and E by exhibiting a family of networks that require Ω(EVlog2V) time.++","PeriodicalId":176545,"journal":{"name":"Proceedings of the eleventh annual ACM symposium on Theory of computing","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128875313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}