
[Proceedings 1988] 29th Annual Symposium on Foundations of Computer Science: Latest Publications

Covering polygons is hard
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21976
J. Culberson, R. Reckhow
It is shown that the following minimum cover problems are NP-hard, even for polygons without holes: (1) covering an arbitrary polygon with convex polygons; (2) covering the boundary of an arbitrary polygon with convex polygons; (3) covering an orthogonal polygon with rectangles; and (4) covering the boundary of an orthogonal polygon with rectangles. It is noted that these results hold even if the polygons are required to be in general position.
Citations: 155
A fast planar partition algorithm. I
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21974
K. Mulmuley
A fast randomized algorithm is given for finding a partition of the plane induced by a given set of line segments. The algorithm is ideally suited for practical use because it is extremely simple and robust, as well as optimal; its expected running time is O(m+n log n), where n is the number of input segments and m is the number of points of intersection. The storage requirement is O(m+n). Though the algorithm itself is simple, the global evolution of the partition is complex, which makes the analysis of the algorithm theoretically interesting in its own right.
Citations: 196
Combinatorial algorithms for the generalized circulation problem
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21959
A. Goldberg, Serge A. Plotkin, É. Tardos
A generalization of the maximum-flow problem is considered in which the amounts of flow entering and leaving an arc are linearly related. More precisely, if x(e) units of flow enter an arc e, x(e)·λ(e) units arrive at the other end. For instance, nodes of the graph can correspond to different currencies, with the multipliers being the exchange rates. Conservation of flow is required at every node except a given source node. The goal is to maximize the amount of flow excess at the source. This problem is a special case of linear programming, and therefore can be solved in polynomial time. The authors present polynomial-time combinatorial algorithms for this problem. The algorithms are simple and intuitive.
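The arc-gain model in this abstract can be illustrated with a small sketch (ours, not code from the paper): gains multiply along a path, which is exactly the currency-exchange reading. The exchange rates below are made-up illustrative numbers.

```python
# Sketch: routing flow along a path in a generalized flow network.
# Each arc e has a gain lambda(e); sending x units into e delivers
# x * lambda(e) at its head, so gains multiply along a path.

def flow_delivered(x, gains):
    """Units arriving at the end of a path whose arcs have the given gains."""
    for lam in gains:
        x *= lam
    return x

# Currency-exchange reading: convert 100 USD via EUR to JPY
# (illustrative rates, not real data).
usd_to_eur = 0.9
eur_to_jpy = 160.0
print(flow_delivered(100, [usd_to_eur, eur_to_jpy]))  # 14400.0
```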
Citations: 120
Results on learnability and the Vapnik-Chervonenkis dimension
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21930
N. Linial, Y. Mansour, R. Rivest
The problem of learning a concept from examples in a distribution-free model is considered. The notion of dynamic sampling, wherein the number of examples examined can increase with the complexity of the target concept, is introduced. This method is used to establish the learnability of various concept classes with an infinite Vapnik-Chervonenkis (VC) dimension. An important variation on the problem of learning from examples, called approximating from examples, is also discussed. The problem of computing the VC dimension of a finite concept set defined on a finite domain is considered.
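The last problem mentioned, computing the VC dimension of a finite concept set on a finite domain, can be stated concretely with a brute-force sketch (ours, not the paper's algorithm, and exponential time by design):

```python
# A set S is shattered by a concept class if every subset of S arises
# as S ∩ c for some concept c; the VC dimension is the size of the
# largest shattered set. Illustration only: exhaustive search.
from itertools import combinations

def shatters(S, concepts):
    S = frozenset(S)
    traces = {S & c for c in concepts}
    return len(traces) == 2 ** len(S)

def vc_dimension(domain, concepts):
    concepts = [frozenset(c) for c in concepts]
    dim = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(S, concepts) for S in combinations(domain, k)):
            dim = k
    return dim

# Threshold-like toy class on {1,2,3}: the "prefixes" {}, {1}, {1,2}, {1,2,3}.
# No two-element set is shattered, so the VC dimension is 1.
print(vc_dimension([1, 2, 3], [set(), {1}, {1, 2}, {1, 2, 3}]))  # 1
```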
Citations: 97
On pointers versus addresses
Pub Date : 1988-10-24 DOI: 10.1145/146637.146666
Amir M. Ben-Amram, Z. Galil
The problem of determining the cost of random-access memory (RAM) is addressed by studying the simulation of random addressing by a machine which lacks it, called a pointer machine. The model allows the use of a data type of choice. A RAM program of time t and space s can be simulated in O(t log s) time using a tree. However, this is not an obvious lower bound, since a high-level data type can allow the data to be encoded in a more economical way. The major contribution is the formalization of incompressibility for general data types. The definition extends a similar property of strings that underlies the theory of Kolmogorov complexity. The main theorem states that for all incompressible data types an Ω(t log s) lower bound holds. Incompressibility is proved for the real numbers with a set of primitives which includes all functions which are continuously differentiable except on a countable closed set.
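The O(t log s) tree simulation mentioned in the abstract can be sketched as follows (our illustrative code, assuming the standard construction, not the paper's own): a pointer machine stores RAM cells at the leaves of a binary trie keyed by address bits, so each read or write follows O(log s) pointers instead of using random addressing.

```python
class Node:
    __slots__ = ("left", "right", "value")
    def __init__(self):
        self.left = self.right = None
        self.value = 0  # cells read as 0 until written

class TrieRAM:
    def __init__(self, address_bits):
        self.bits = address_bits
        self.root = Node()

    def _walk(self, addr):
        node = self.root
        for i in range(self.bits - 1, -1, -1):  # follow address bits, MSB first
            if (addr >> i) & 1:
                if node.right is None: node.right = Node()
                node = node.right
            else:
                if node.left is None: node.left = Node()
                node = node.left
        return node

    def write(self, addr, val): self._walk(addr).value = val
    def read(self, addr): return self._walk(addr).value

ram = TrieRAM(address_bits=16)
ram.write(12345, 7)
print(ram.read(12345))  # 7
```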
Citations: 46
Learning probabilistic prediction functions
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21929
A. D. Santis, G. Markowsky, M. Wegman
The question of how to learn rules is considered in the case where those rules make probabilistic statements about the future. Issues are discussed that arise when attempting to determine what a good prediction function is when those prediction functions make probabilistic assumptions. Learning has at least two purposes: to enable the learner to make predictions in the future, and to satisfy intellectual curiosity about the underlying cause of a process. Two results related to these distinct goals are given. In both cases, the inputs are a countable collection of functions which make probabilistic statements about a sequence of events. One result shows how to find a function in the collection that generated the sequence; the other shows how to predict events as well as the best function in the collection. In both cases the results are obtained by evaluating a function based on a tradeoff between its simplicity and the accuracy of its predictions.
Citations: 90
Three stacks
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21967
M. Fredman, D. Goldsmith
The storage allocation for three stacks has been traditionally accomplished by using pointers to store the stacks as linked lists or by relocating the stacks within memory when collisions take place. The former approach requires additional space to store the pointers, and the latter approach requires additional time. The authors explore the extent to which some additional space or time is required to maintain three stacks. They provide a formal setting for this topic and establish upper and lower complexity bounds on various aspects.
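The classical pointer approach the abstract refers to can be sketched as follows (our illustration, not the paper's new bounds): three stacks share one array, each cell holding a value and a next-index, with a free list threading the unused cells. The cost is one extra pointer per element, which is exactly the space overhead the paper studies.

```python
class ThreeStacks:
    def __init__(self, capacity):
        self.val = [None] * capacity
        self.nxt = list(range(1, capacity)) + [-1]  # free list threads all cells
        self.free = 0
        self.top = [-1, -1, -1]                     # top cell index of each stack

    def push(self, s, x):
        if self.free == -1:
            raise MemoryError("shared array full")
        cell, self.free = self.free, self.nxt[self.free]  # pop a free cell
        self.val[cell], self.nxt[cell] = x, self.top[s]   # link it atop stack s
        self.top[s] = cell

    def pop(self, s):
        cell = self.top[s]
        if cell == -1:
            raise IndexError("pop from empty stack")
        # unlink the top cell and return it to the free list
        self.top[s], self.nxt[cell], self.free = self.nxt[cell], self.free, cell
        return self.val[cell]

st = ThreeStacks(8)
st.push(0, 'a'); st.push(1, 'b'); st.push(0, 'c')
print(st.pop(0), st.pop(1), st.pop(0))  # c b a
```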
Citations: 1
A lower bound for matrix multiplication
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21922
N. Bshouty
It is proved that computing the product of two n×n matrices over the binary field requires at least 2.5n^2 - o(n^2) multiplications.
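For context, the bound counts multiplications in GF(2), where addition is XOR and multiplication is AND. A small sketch of ours (not from the paper) showing the model: the schoolbook algorithm performs n^3 such multiplications, against which the roughly 2.5n^2 lower bound is stated.

```python
def gf2_matmul(A, B):
    """Schoolbook n x n matrix product over GF(2), counting multiplications."""
    n = len(A)
    mults = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s ^= A[i][k] & B[k][j]  # one GF(2) multiplication (AND)
                mults += 1
            C[i][j] = s
    return C, mults

A = [[1, 0], [1, 1]]
B = [[0, 1], [1, 1]]
C, mults = gf2_matmul(A, B)
print(C, mults)  # [[0, 1], [1, 0]] 8
```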
Citations: 38
Fully abstract models of the lazy lambda calculus
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21953
C. Ong
Much of what is known about the model theory and proof theory of the λ-calculus is sensible in nature, i.e. only head normal forms are semantically meaningful. However, most functional languages are lazy, i.e. programs are evaluated in normal order to weak head normal forms. The author develops a theory of lazy, or strongly sensible, λ-calculus that corresponds to practice. A general method for constructing fully abstract models for a class of lazy languages is illustrated. A formal system called λβC (the λβ-calculus with convergence testing C) is introduced, and its properties are investigated.
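The distinction the abstract draws, evaluating only to weak head normal form, can be made concrete with a tiny evaluator (our sketch, assuming the usual notion of lazy evaluation; not from the paper): reduction stops at any lambda and arguments are bound as unevaluated thunks, so K I Ω converges even though Ω diverges.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a).
# Environments map variable names to (term, env) thunks.

def whnf(term, env):
    kind = term[0]
    if kind == 'var':
        t, e = env[term[1]]          # force the thunk bound to the variable
        return whnf(t, e)
    if kind == 'lam':
        return term, env             # WHNF: do not reduce under the lambda
    lam, lam_env = whnf(term[1], env)
    _, x, body = lam
    new_env = dict(lam_env)
    new_env[x] = (term[2], env)      # bind the argument as an unevaluated thunk
    return whnf(body, new_env)

I = ('lam', 'x', ('var', 'x'))
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
omega = ('app', ('lam', 'w', ('app', ('var', 'w'), ('var', 'w'))),
               ('lam', 'w', ('app', ('var', 'w'), ('var', 'w'))))

# K I omega reduces to I without ever forcing the divergent omega.
result, _ = whnf(('app', ('app', K, I), omega), {})
print(result)  # ('lam', 'x', ('var', 'x'))
```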
Citations: 43
Dynamic perfect hashing: upper and lower bounds
Pub Date : 1988-10-24 DOI: 10.1109/SFCS.1988.21968
Martin Dietzfelbinger, Anna R. Karlin, K. Mehlhorn, F. Heide, H. Rohnert, R. Tarjan
A randomized algorithm is given for the dictionary problem with O(1) worst-case time for lookup and O(1) amortized expected time for insertion and deletion. An Ω(log n) lower bound is proved for the amortized worst-case time complexity of any deterministic algorithm in a class of algorithms encompassing realistic hashing-based schemes. If the worst-case lookup time is restricted to k, then the lower bound for insertion becomes Ω(k·n^{1/k}).
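The static two-level (FKS-style) scheme that this paper makes dynamic can be sketched as follows (our illustrative code, not the authors'): keys hash into n buckets, and each bucket of size b gets its own collision-free table of size b^2, found by retrying random hash functions; a lookup is then two hash evaluations, worst case.

```python
import random

P = 2**31 - 1  # a prime larger than any key used below

def make_hash(m):
    """Random hash from the family x -> ((a*x + b) mod P) mod m."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def all_place(bucket, g, slots):
    for k in bucket:
        i = g(k)
        if slots[i] is not None:
            return False             # collision: reject this hash function
        slots[i] = k
    return True

def build(keys):
    n = max(1, len(keys))
    h = make_hash(n)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[h(k)].append(k)
    tables = []
    for bucket in buckets:
        m = max(1, len(bucket) ** 2)  # quadratic space makes collisions unlikely
        while True:                   # expected O(1) retries per bucket
            g = make_hash(m)
            slots = [None] * m
            if all_place(bucket, g, slots):
                break
        tables.append((g, slots))
    return h, tables

def lookup(key, h, tables):          # worst-case O(1): two hash evaluations
    g, slots = tables[h(key)]
    return slots[g(key)] == key

h, tables = build([3, 17, 99, 2024])
print(lookup(17, h, tables), lookup(5, h, tables))  # True False
```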
Citations: 8