
Latest publications in Computer Languages

List of contents and author index
Pub Date : 1994-11-01 DOI: 10.1016/0096-0551(94)90009-4
Computer Languages, Vol. 20, No. 4, pp. iii-iv
Citations: 0
A process oriented semantics of the PRAM-language FORK
Pub Date : 1994-11-01 DOI: 10.1016/0096-0551(94)90007-8
Gudula Rünger, Kurt Sieber

The parallel language FORK [1], based on a scalable shared memory model, is a PASCAL-like language with some additional parallel constructs. A PRAM (Parallel Random Access Machine) algorithm can be expressed on a high level of abstraction as a FORK program which is translated into efficient PRAM code guaranteeing theoretically predicted runtimes.

In this paper, we concentrate on those features of the language FORK related to parallelism, such as the group concept, shared memory access, and synchronous or asynchronous execution. We present a trace-based denotational interleaving semantics in which processes describe synchronous computations. Processes are created and deleted dynamically and run asynchronously. The interleaving rules reflect the underlying CRCW (concurrent-read, concurrent-write) PRAM model.
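The paper's denotational semantics is beyond the scope of an abstract, but the underlying idea of an interleaving semantics, that the behaviors of two asynchronous processes are all merges of their traces that preserve each process's internal event order, can be illustrated with a small sketch (my own example, not the paper's formalism; the function name is hypothetical):

```python
from itertools import combinations

def interleavings(t1, t2):
    """All merges of traces t1 and t2 that preserve each trace's internal order."""
    n, m = len(t1), len(t2)
    result = []
    for chosen in combinations(range(n + m), n):  # positions taken by t1's events
        chosen = set(chosen)
        it1, it2 = iter(t1), iter(t2)
        result.append(tuple(next(it1) if k in chosen else next(it2)
                            for k in range(n + m)))
    return result
```

For example, `interleavings(['a', 'b'], ['c'])` yields the three merges of a two-event trace with a one-event trace; in general there are C(n+m, n) of them, which is why interleaving semantics are usually defined by rules rather than enumerated.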

Computer Languages, Vol. 20, No. 4, pp. 253-265
Citations: 0
Parallel incremental LR parsing
Pub Date : 1994-08-01 DOI: 10.1016/0096-0551(94)90002-7
N. Viswanathan, Y.N. Srikant

A new parallel parsing algorithm for block structured languages, also capable of incremental parsing, is presented. The parser is for LR grammars, and a shared memory multiprocessor model is assumed. Processors are assigned to parse corrections independently, with minimal reparsing. A new compatibility condition lets the associated processors terminate parsing and avoid redoing the work of other processors. We give an efficient way of assembling the final parse tree from the individual parses. The compatibility condition is simple, can be computed at parser construction time, and can be tested in constant time while parsing. The parser can be integrated into an editor. We give an estimate of the speedup of our parallel parsing and parallel incremental parsing methods, and we have obtained considerable speedups in simulation studies of the algorithm.
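Why block structure helps parallel parsing can be sketched simply: at nesting depth zero, the token stream splits into chunks that can be handed to different processors and parsed independently. The sketch below is my own illustration (the `begin`/`end` token names are assumptions, and the actual algorithm's work partitioning and compatibility condition are more sophisticated):

```python
def split_blocks(tokens, open_tok="begin", close_tok="end"):
    """Split a token stream into top-level blocks of a block structured
    language; each chunk can then be parsed by a separate processor."""
    chunks, depth, current = [], 0, []
    for tok in tokens:
        current.append(tok)
        if tok == open_tok:
            depth += 1
        elif tok == close_tok:
            depth -= 1
            if depth == 0:          # a complete top-level block ends here
                chunks.append(current)
                current = []
    if current:                      # trailing tokens outside any block
        chunks.append(current)
    return chunks
```

The results of the independent parses must then be assembled into one tree, which is where the paper's compatibility condition and assembly scheme come in.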

Computer Languages, Vol. 20, No. 3, pp. 151-175
Citations: 3
Discrete loops and worst case performance
Pub Date : 1994-08-01 DOI: 10.1016/0096-0551(94)90004-3
Johann Blieberger

In this paper, so-called discrete loops are introduced, which narrow the gap between general loops (e.g. while- or repeat-loops) and for-loops. Although discrete loops can be used for applications that would otherwise require general loops, discrete loops are guaranteed to terminate. Furthermore, it is possible to determine the number of iterations of a discrete loop, which is trivial for for-loops and extremely difficult for general loops. Thus discrete loops form an ideal framework for determining the worst case timing behavior of a program, and they are especially useful in implementing real-time systems and proving such systems correct.
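A simple instance of the idea (my own example, not the paper's definition): a loop whose variable follows a known monotone progression, here doubling, sits between a for-loop and a general while-loop, because its iteration count has a closed form even though it is not a counted loop. Both function names below are hypothetical:

```python
import math

def discrete_loop_iterations(start, bound, factor=2):
    """Iteration count of the 'discrete loop':
        i = start; while i < bound: i *= factor
    Unlike a general while-loop, this is guaranteed to terminate
    for start >= 1, factor >= 2, and its count is predictable."""
    count, i = 0, start
    while i < bound:
        i *= factor
        count += 1
    return count

def predicted(start, bound, factor=2):
    """Closed-form prediction: ceil(log_factor(bound / start))."""
    return max(0, math.ceil(math.log(bound / start, factor)))
```

The closed form is exactly the kind of static iteration bound a worst-case execution time analysis needs.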

Computer Languages, Vol. 20, No. 3, pp. 193-212
Citations: 28
Experiments with destructive updates in a lazy functional language
Pub Date : 1994-08-01 DOI: 10.1016/0096-0551(94)90003-5
Pieter H. Hartel, Willem G. Vree

The aggregate update problem has received considerable attention since pure functional programming languages were recognised as an interesting research topic. There is extensive literature in this area proposing a wide variety of solutions. We have tried to apply some of the proposed solutions to our own applications to see how they work in practice. We have been able to use destructive updates, but are not convinced that this could have been achieved without application-specific knowledge. In particular, no form of update analysis has been reported that is applicable to non-flat domains in polymorphic languages with higher-order functions.

It is our belief that a refinement of the monolithic approach towards constructing arrays may be a good alternative to using the incremental approach with destructive updates.
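The contrast between the two styles can be sketched as follows (my own illustration in Python rather than a lazy functional language; both function names are hypothetical). The incremental style applies one update at a time and, absent a destructive-update optimization, must copy the array at each step; the monolithic style defines every element at once from an index function, so no intermediate arrays ever exist:

```python
def incremental_build(n, updates):
    """Incremental style: start from a zero array and apply updates one by
    one; each pure update copies the whole array (O(n) per update)."""
    arr = [0] * n
    for i, v in updates:
        arr = arr[:i] + [v] + arr[i + 1:]   # fresh copy, no destructive write
    return arr

def monolithic_build(n, f):
    """Monolithic style: construct the whole array in one step from an
    index function, with no intermediate versions to copy or destroy."""
    return [f(i) for i in range(n)]
```

The monolithic form sidesteps the aggregate update problem entirely, which is the refinement direction the authors suggest.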

Computer Languages, Vol. 20, No. 3, pp. 177-192
Citations: 7
An automatic parallelization framework for multicomputers
Pub Date : 1994-08-01 DOI: 10.1016/0096-0551(94)90001-9
U. Nagaraj Shenoy , Y.N. Srikant , V.P. Bhatkar

Several researchers have looked into various issues related to automatic parallelization of sequential programs for multicomputers, but there is a need for a coherent framework encompassing all of these issues. In this paper we present such a framework, which takes best advantage of the multicomputer architecture. We use the tiling transformation for iteration space partitioning and propose a scheme of automatic data partitioning and dynamic data distribution. We have tried a simple implementation of our scheme on a transputer-based multicomputer [1], and the results are encouraging.
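Tiling partitions a loop nest's iteration space into blocks that can be assigned to different processors, each working on a tile's worth of data locally. A minimal sketch of the partitioning itself (my own illustration; the function name is hypothetical, and the paper's scheme additionally handles data distribution):

```python
def tile_iteration_space(n, m, tile):
    """Partition the 2-D iteration space {0..n-1} x {0..m-1} into
    tile x tile blocks; each block is a unit of work for one processor."""
    tiles = []
    for ti in range(0, n, tile):
        for tj in range(0, m, tile):
            tiles.append([(i, j)
                          for i in range(ti, min(ti + tile, n))
                          for j in range(tj, min(tj + tile, m))])
    return tiles
```

Every iteration lands in exactly one tile, so the tiles can be scheduled independently once data dependences between them are respected.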

Computer Languages, Vol. 20, No. 3, pp. 135-150
Citations: 6
An empirical study of the run-time behavior of quicksort, Shellsort and mergesort for medium to large size data
Pub Date : 1994-05-01 DOI: 10.1016/0096-0551(94)90019-1
S. Mansoor Sarwar , Mansour H.A. Jaragh , Mike Wind

The paper describes the results of a large empirical study measuring the practical behavior of the basic versions of the popular internal sorting algorithms Shellsort, quicksort, and mergesort on medium to large size data, and compares them with previous results. The results give running times of Θ(N^1.25) for Shellsort, quicksort, and mergesort for 1000 < N < 2 × 10^6. The study also shows that Shellsort behaves better than mergesort for 1000 < N < 150,000; however, mergesort outperforms Shellsort for N > 150,000. Quicksort outperforms both Shellsort and mergesort for all values of N > 1000. Our fits show better performance for Shellsort than previous studies and are mostly accurate to within 2% for 1000 < N < 2 × 10^6. The primary reason for this error seems to be related to the error in the measured data.
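How an exponent like 1.25 is extracted from timing data: if time ≈ c·N^k, then log(time) is linear in log(N) with slope k, so a least-squares fit on the logs recovers k. A sketch with synthetic data (my own illustration; the paper does not specify its fitting procedure, and the function name and constant are assumptions):

```python
import math

def fit_power_exponent(ns, times):
    """Least-squares slope of log(time) vs log(N); for time = c * N**k
    this recovers the exponent k."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ns = [1000, 5000, 20000, 100000]
times = [2e-9 * n ** 1.25 for n in ns]   # synthetic data following N^1.25
```

On real measurements the recovered slope carries the noise of the timings, which is consistent with the authors' remark that the residual error traces back to measurement error.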

Computer Languages, Vol. 20, No. 2, pp. 127-134
Citations: 3
A practical approach to type-sensitive parsing
Pub Date : 1994-05-01 DOI: 10.1016/0096-0551(94)90017-5
Ken Sailor, Carl McCrosky

Type-sensitive parsing of expressions is context-sensitive parsing based on type. Previous research reported a general class of algorithms for type-sensitive parsing; unfortunately, these algorithms are impractical for languages that infer types. This paper describes a related algorithm which is much more efficient: its incremental cost is linear (with a small constant) in the length of the expression, even when types must be deduced. Our method can be applied to any statically typed language, solving a variety of problems associated with conventional parsing techniques, including operator precedence and the interaction between infix operators and higher-order functions.

Computer Languages, Vol. 20, No. 2, pp. 101-116
Citations: 0
Exception handling: Expecting the unexpected
Pub Date : 1994-05-01 DOI: 10.1016/0096-0551(94)90015-9
Steven J. Drew, K. John Gough

Since the mid-1970s, with the development of each new programming paradigm, there has been increasing interest in exceptions and the benefits of exception handling. With the move towards programming for ever more complex architectures, understanding basic facilities such as exception handling, as an aid to improving program reliability, robustness and comprehensibility, has become much more important. This interest has sparked many papers, both theoretical and practical, each giving a view of exceptions and exception handling from a different standpoint.

In an effort to provide a means of classifying the exception handling models that may be encountered, a taxonomy is presented in this paper. As the taxonomy is developed, some of the concepts of exception handling are introduced and discussed. The taxonomy is applied to a number of exception handling models in contemporary programming languages, and some observations and conclusions are offered.

Computer Languages, Vol. 20, No. 2, pp. 69-87
Citations: 16
Grammar transformations for optimizing backtrack parsers
Pub Date : 1994-05-01 DOI: 10.1016/0096-0551(94)90016-7
Janos J. Sarbo

We present two grammar transformations which can decrease the search space of generated top-down backtrack parsers. The transformations are simple and can be of practical use.

The first transformation, which is a combination of substitution and left-factorization, is based on the LR-table construction. The second transformation uses the computation of the sets FIRST and FOLLOW, and a grammar property called relative unambiguity.

The time complexity of the transformations is worst-case polynomial, and in practical cases linear in the size of the grammar.
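The FIRST sets mentioned above are computed by the standard fixed-point iteration: keep propagating terminals (and epsilon) through productions until nothing changes. A self-contained sketch (this is the textbook algorithm, not the paper's transformation; the `"eps"` marker and the empty tuple for an epsilon production are my encoding choices):

```python
def first_sets(grammar, terminals):
    """Fixed-point computation of FIRST for each nonterminal.
    grammar maps a nonterminal to a list of productions (tuples of
    symbols); the empty tuple () is an epsilon production."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                for sym in prod:
                    if sym in terminals:            # terminal starts the production
                        if sym not in first[nt]:
                            first[nt].add(sym)
                            changed = True
                        break
                    new = first[sym] - {"eps"}      # propagate from a nonterminal
                    if not new <= first[nt]:
                        first[nt] |= new
                        changed = True
                    if "eps" not in first[sym]:     # sym cannot vanish; stop here
                        break
                else:
                    # every symbol (or none) can derive epsilon
                    if "eps" not in first[nt]:
                        first[nt].add("eps")
                        changed = True
    return first

# Example: E -> T Ep ; Ep -> + T Ep | epsilon ; T -> id
grammar = {"E": [("T", "Ep")], "Ep": [("+", "T", "Ep"), ()], "T": [("id",)]}
fs = first_sets(grammar, {"+", "id"})
```

FOLLOW is computed by a similar fixed-point pass once FIRST is available, and together they let a transformed grammar's parser prune alternatives before backtracking into them.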

Computer Languages, Vol. 20, No. 2, pp. 89-100
Citations: 5