
Proceedings of the forty-seventh annual ACM symposium on Theory of Computing: latest publications

Complexity measures and hierarchies for the evaluation of integers, polynomials, and n-linear forms
Pub Date : 1975-05-05 DOI: 10.1145/800116.803746
R. Lipton, D. Dobkin
The difficulty of evaluating integers and polynomials has been studied in various frameworks ranging from the addition-chain approach [5] to integer evaluation to recent efforts aimed at generating polynomials that are hard to evaluate [2,8,10]. Here we consider the classes of integers and polynomials that can be evaluated within given complexity bounds and prove the existence of proper hierarchies of complexity classes. The framework in which our problems are cast is general enough to allow any finite set of binary operations rather than just addition, subtraction, multiplication, and division. The motivation for studying complexity classes rather than specific integers or polynomials is analogous to why complexity classes are studied in automata-based complexity: (i) the immense difficulty associated with computing the complexity of a specific integer or polynomial; (ii) the important insight obtained from discovering the structure of the complexity classes.
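The addition-chain approach referenced in the abstract measures the cost of an integer n by the fewest additions needed to reach n from 1. As a point of reference only (this is not the paper's construction, and the function names are illustrative), a minimal brute-force Python sketch that computes that minimum for small n by iterative deepening:

    def shortest_addition_chain(n: int) -> int:
        """Minimum number of additions in an addition chain 1 = a_0, a_1, ..., a_r = n."""
        if n == 1:
            return 0
        limit = 1
        while True:                               # iterative deepening on the chain length
            if _extend([1], n, limit):
                return limit
            limit += 1

    def _extend(chain, n, limit):
        last = chain[-1]
        if last == n:
            return True
        steps_left = limit - (len(chain) - 1)
        if steps_left == 0 or last << steps_left < n:   # even repeated doubling cannot reach n
            return False
        tried = set()
        for i in range(len(chain) - 1, -1, -1):         # extend with a sum of two chain elements
            for j in range(i, -1, -1):
                s = chain[i] + chain[j]
                if last < s <= n and s not in tried:    # shortest chains can be taken strictly increasing
                    tried.add(s)
                    chain.append(s)
                    if _extend(chain, n, limit):
                        return True
                    chain.pop()
        return False

    print(shortest_addition_chain(15))   # 5, e.g. via the chain 1, 2, 3, 6, 12, 15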
Citations: 2
Intercalation theorems for tree transducer languages
Pub Date : 1975-05-05 DOI: 10.1145/800116.803761
C. Raymond Perrault
We develop intercalation lemmas for the computations of the top-down tree transducers defined by Rounds [15] and Thatcher [17]. These lemmas are used to prove necessary conditions for languages all of whose strings are of exponential length to be tree transducer languages. The language {ww : w ∈ {a,b}*, |w| = 2^n, n ≥ 0}, which is generable by the composition of two transducers, is shown not to be generable by one. The proof technique applies to bottom-up transducers as well. The results are related to some subclasses of Woods' Augmented Transition Networks [18] characterized elsewhere in terms of tree transducer languages [14].
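For orientation only (this check is not part of the paper), the language in question is the set of squares ww over {a,b} whose half-length is a power of two, so string lengths grow exponentially; a small Python membership test, with an illustrative function name:

    def in_language(s: str) -> bool:
        """Membership in {ww : w in {a,b}*, |w| = 2^n, n >= 0}."""
        half, rem = divmod(len(s), 2)
        if rem != 0 or half == 0:                    # the string must split into two equal halves
            return False
        if half & (half - 1) != 0:                   # the half-length must be a power of two
            return False
        if any(c not in "ab" for c in s):
            return False
        return s[:half] == s[half:]                  # and the two halves must be identical

    print(in_language("abab"))      # True:  w = "ab", |w| = 2
    print(in_language("abaaba"))    # False: |w| = 3 is not a power of two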
Citations: 6
On non-linear lower bounds in computational complexity
Pub Date : 1975-05-05 DOI: 10.1145/800116.803752
L. Valiant
The purpose of this paper is to explore the possibility that purely graph-theoretic reasons may account for the superlinear complexity of wide classes of computational problems. The results are therefore of two kinds: reductions to graph theoretic conjectures on the one hand, and graph theoretic results on the other. We show that the graph of any algorithm for any one of a number of arithmetic problems (e.g. polynomial multiplication, discrete Fourier transforms, matrix multiplication) must have properties closely related to concentration networks.
Citations: 90
On the complexity of the Extended String-to-String Correction Problem
Pub Date : 1975-05-05 DOI: 10.1145/800116.803771
R. Wagner
The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) = B. The sequence S may make use of the operations Change, Insert, Delete and Swap, each of constant cost W_C, W_I, W_D, and W_S respectively. Swap permits any pair of adjacent characters to be interchanged. The principal results of this paper are: (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time O(|A| * |B| * |V|^s * s), where s = min(4W_C, W_I + W_D)/W_S + 1; (2) presentation of polynomial time algorithms for the cases (a) W_S = 0, (b) W_S > 0, W_C = W_I = W_D = ∞; (3) proof that ESSCP, with W_I < W_C = W_D = ∞, 0 < W_S < ∞, suitably encoded, is NP-complete. (The remaining case, W_S = ∞, reduces ESSCP to the string-to-string correction problem of [1], where an O(|A| * |B|) algorithm is given.) Thus, “almost all” ESSCPs can be solved in deterministic polynomial time, but the general problem is NP-complete.
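The remaining case W_S = ∞ mentioned in the abstract is the classical string-to-string correction problem of [1]. A minimal dynamic-programming sketch for that case only (it is not the CELLAR algorithm, handles no swaps, and its function name and default costs are illustrative):

    def correction_cost(A: str, B: str, wc: float = 1.0, wi: float = 1.0, wd: float = 1.0) -> float:
        """Cheapest cost of editing A into B using Change, Insert, Delete (no Swap)."""
        m, n = len(A), len(B)
        D = [[0.0] * (n + 1) for _ in range(m + 1)]   # D[i][j]: cost of editing A[:i] into B[:j]
        for i in range(1, m + 1):
            D[i][0] = i * wd                          # delete the first i characters of A
        for j in range(1, n + 1):
            D[0][j] = j * wi                          # insert the first j characters of B
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                change = D[i - 1][j - 1] + (0.0 if A[i - 1] == B[j - 1] else wc)
                insert = D[i][j - 1] + wi
                delete = D[i - 1][j] + wd
                D[i][j] = min(change, insert, delete)
        return D[m][n]

    print(correction_cost("abca", "acb"))   # 2.0 with unit costs: delete 'b', change the last 'a' to 'b'

This table-filling recurrence runs in time proportional to |A| * |B|, matching the bound quoted for the swap-free case.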
Citations: 111
Feasibly constructive proofs and the propositional calculus (Preliminary Version)
Pub Date : 1975-05-05 DOI: 10.1145/800116.803756
S. Cook
The motivation for this work comes from two general sources. The first source is the basic open question in complexity theory of whether P equals NP (see [1] and [2]). Our approach is to try to show they are not equal, by trying to show that the set of tautologies is not in NP (of course its complement is in NP). This is equivalent to showing that no proof system (in the general sense defined in [3]) for the tautologies is “super” in the sense that there is a short proof for every tautology. Extended resolution is an example of a powerful proof system for tautologies that can simulate most standard proof systems (see [3]). The Main Theorem (5.5) in this paper describes the power of extended resolution in a way that may provide a handle for showing it is not super. The second motivation comes from constructive mathematics. A constructive proof of, say, a statement ∀xA must provide an effective means of finding a proof of A for each value of x, but nothing is said about how long this proof is as a function of x. If the function is exponential or super exponential, then for short values of x the length of the proof of the instance of A may exceed the number of electrons in the universe. In section 2, I introduce the system PV for number theory, and it is this system which I suggest properly formalizes the notion of a feasibly constructive proof.
Citations: 227
On computing the minima of quadratic forms (Preliminary Report)
Pub Date : 1975-05-05 DOI: 10.1145/800116.803749
A. Yao
The following problem was recently raised by C. William Gear [1]: Let F(x_1, x_2, ..., x_n) = Σ_{i≤j} a'_{ij} x_i x_j + Σ_i b_i x_i + c be a quadratic form in n variables. We wish to compute the point x^(0) = (x_1^(0), ..., x_n^(0)), at which F achieves its minimum, by a series of adaptive functional evaluations. It is clear that, by evaluating F(x) at (n+1)(n+2)/2 + 1 points, we can determine the coefficients a'_{ij}, b_i, c and thereby find the point x^(0). Gear's question is, “How many evaluations are necessary?” In this paper, we shall prove that O(n^2) evaluations are necessary in the worst case for any such algorithm.
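A quadratic form in n variables has (n+1)(n+2)/2 unknown coefficients (n(n+1)/2 quadratic terms, n linear terms, and the constant), which is where the evaluation count quoted in the abstract comes from. A small sketch (not from the paper; names are illustrative, NumPy assumed) that recovers the coefficients for n = 2 from that many evaluations at fixed points and then reads off the minimizer from the gradient system:

    import numpy as np

    def F(x1, x2):                       # the "black box" quadratic form to be recovered
        return 2*x1*x1 + 3*x1*x2 + 4*x2*x2 - 5*x1 + 6*x2 + 7

    # Unknowns (a11, a12, a22, b1, b2, c): 6 = (2+1)(2+2)/2 coefficients.
    points = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
    rows = [[x1*x1, x1*x2, x2*x2, x1, x2, 1] for (x1, x2) in points]
    vals = [F(x1, x2) for (x1, x2) in points]
    a11, a12, a22, b1, b2, c = np.linalg.solve(np.array(rows, float), np.array(vals, float))

    # At the minimum the gradient vanishes:
    #   2*a11*x1 +   a12*x2 = -b1
    #     a12*x1 + 2*a22*x2 = -b2
    H = np.array([[2*a11, a12], [a12, 2*a22]])
    x_min = np.linalg.solve(H, np.array([-b1, -b2]))
    print(np.round([a11, a12, a22, b1, b2, c], 6))   # recovers [2, 3, 4, -5, 6, 7]
    print(x_min)                                      # the minimizing point of F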
Citations: 6
Proving assertions about programs that manipulate data structures
Pub Date : 1975-05-05 DOI: 10.1145/800116.803758
D. Oppen, S. Cook
In this paper we wish to consider the problem of proving assertions about programs that construct and alter data structures. Our method will be to define a suitable assertion language L for data structures, to define a simple programming language L' for constructing and altering data structures, to give axioms and rules of inference (in the style of [Hoare 1969]) which specify the effect of program segments on data structures (described by formulas in L) and finally to prove that these axioms are correct (relative to a formal definition of the semantics of L') and, in a reasonable sense, complete. Thus our intention is to provide a complete theoretical framework for describing arbitrary data structures and proving assertions about programs that manipulate them.
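For readers unfamiliar with the style of axioms being referred to, the classical assignment axiom from [Hoare 1969] is shown below purely as an illustration of that style; it is not one of the data-structure axioms developed in this paper:

    % Hoare's axiom of assignment: P holds after x := E provided
    % P with E substituted for x held beforehand.
    \[
      \{\, P[E/x] \,\}\quad x := E \quad \{\, P \,\}
    \]
    % Example instance: { y + 1 > 0 }  x := y + 1  { x > 0 }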
Citations: 34
Degree-languages, polynomial time recognition, and the LBA problem
Pub Date : 1975-05-05 DOI: 10.1145/800116.803763
D. Wotschke
The so-called Chomsky hierarchy [5], consisting of regular, context-free, context-sensitive, and recursively enumerable languages, does not account for many “real world” classes of languages, e.g., programming languages and natural languages [4]. This is one of the reasons why many attempts have been made to “refine” the original Chomsky classification. The main goal has been to describe languages which, for instance, are not context-free but are still context-sensitive, without using the powerful and complex concept of context-sensitive grammars.
Citations: 4
Two applications of a probabilistic search technique: Sorting X+Y and building balanced search trees
Pub Date : 1975-05-05 DOI: 10.1145/800116.803774
M. Fredman
Let X = {x_1, ..., x_N} and Y = {y_1, ..., y_N} be sets of N real numbers. We denote by X + Y the multiset {x_i + y_j; 1 ≤ i, j ≤ N} of size N^2. Berlekamp has posed the problem of sorting X + Y. Harper, Payne, Savage and Strauss [1] show that N^2 log_2 N comparisons suffice to sort X + Y, thereby saving a factor of 2 over sorting without exploiting the structure of X + Y. (Given u in X + Y, we assume that we know the i, j indices such that u = x_i + y_j.) Furthermore, they show that this bound is tight for a restricted class of comparison algorithms. However, without their restriction the order of magnitude comparison complexity of this problem has remained an open question. In this paper we show that X + Y can be sorted with O(N^2) comparisons. Our proof is unusual for this type of problem in that we do not explicitly exhibit an algorithm. Instead, it is a particular application of a more general search technique whose behavior is easily related to information theoretic lower bounds. In the context of sorting, this search method translates into an insertion sort, where the insertions are not performed by means of the usual binary search, but rather as off-centered searches designed so that each comparison, roughly speaking, equally divides the space of remaining possibilities. We draw attention to this search technique because it might find application to other problems, and we illustrate this possibility with a second application. Our second application concerns the construction of probabilistically balanced binary search trees.
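For concreteness only (this is not the paper's procedure), the sketch below builds the multiset X + Y with the (i, j) tags the abstract assumes are available and sorts it with the library sort, which uses on the order of N^2 log N comparisons; the paper's result is that O(N^2) comparisons suffice, although no explicit algorithm is exhibited:

    def sort_x_plus_y(X, Y):
        """Return the N^2 sums x_i + y_j in nondecreasing order, each tagged with its indices."""
        sums = [(x + y, i, j) for i, x in enumerate(X) for j, y in enumerate(Y)]
        sums.sort(key=lambda t: t[0])        # comparisons are made on the sums only
        return sums

    X = [3.0, 1.0, 2.0]
    Y = [0.5, 4.0, 1.5]
    for s, i, j in sort_x_plus_y(X, Y):
        print(f"x[{i}] + y[{j}] = {s}")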
Citations: 67
Four models for the analysis and optimization of program control structures
Pub Date : 1975-05-05 DOI: 10.1145/800116.803766
T. W. Pratt
The analysis of the relation between the structure of a program and the function that it computes requires a decomposition of the program into its components. Traditionally this decomposition has been based on the common division of a program into subprograms, and ultimately into statements, expressions and individual variables and constants. In this paper an alternative decomposition is proposed that is based on the decomposition of a program into a set of kernel elements, those program elements that participate in the direct computation of the outputs of the program, and a set of control elements, those elements that participate in the determination of the execution path. The kernel-control decomposition of a program leads to a series of progressively more abstract program representations, each of which has both theoretical and practical interest. The separation of control structure from kernel and the three abstract models presented here, which are based on this decomposition, are particularly valuable in the analysis and optimization of program control structures. This research summary outlines the major results, which will be reported in full in a journal article.
Citations: 2