Attribute grammars have been used for many years for automated compiler construction. Attribute grammars support the description of semantic analysis, code generation and some code optimization in a formal declarative style. Other tools support the automation of lexical analysis and parsing. However, there is one large part of compiler construction that is missing from our toolkit: run-time environments. This paper introduces an extension of attribute grammars that supports the generation of run-time environments. The extension also supports the generation of interpreters, symbolic debugging tools, and other execution-time facilities.
{"title":"Generation of run-time environments","authors":"G. Kaiser","doi":"10.1145/12276.13316","DOIUrl":"https://doi.org/10.1145/12276.13316","url":null,"abstract":"Attribute grammars have been used for many years for automated compiler construction. Attribute grammars support the description of semantic analysis, code generation and some code optimization in a formal declarative style. Other tools support the automation of lexical analysis and parsing. However, there is one large part of compiler construction that is missing from our toolkit: run-time environments. This paper introduces an extension of attribute grammars that supports the generation of run-time environments. The extension also supports the generation of interpreters, symbolic debugging tools, and other execution-time facilities.\u0000/","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121919923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In semantics-directed compiler generation one faces the problem of translating a source semantic definition of a programming language into an equivalent target semantics closer to an implementation. Most existing work solves this problem non-constructively: a target semantics is exhibited first and only afterwards proved correct against the source. We try to show that a target semantics can be derived from a source semantics constructively, so that certain correctness properties are automatically preserved. The framework is denotational semantics.
{"title":"Transformations of denotational semantics in semantics directed compiler generation","authors":"V. Royer","doi":"10.1145/12276.13318","DOIUrl":"https://doi.org/10.1145/12276.13318","url":null,"abstract":"In semantics-directed compiler generation one is faced with the problem of how to translate a source semantic definition of a programming language into on equivalent target semantics closer to an implementation. Most of the existing works solve this problem in a non constructive way : a target semantics is exhibited first and then only proved correct against the source. We try to show that target semantics can be derived from source semantics in a constructive way and so that some correctness ideas are automatically preserved. The framework is denotational semantics.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131730916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I have designed and built a compiler construction tool that automates much of the case analysis necessary to exploit special-purpose instructions on a target machine. Given a suitable description of the target machine, my analysis identifies instruction sequences that are equivalent to single instructions. During code generation, these equivalences can be used to avoid inefficient instruction sequences in favor of more efficient instructions. I present a working prototype of the instruction set analyzer needed in the framework outlined by [Giegerich 83]. In contrast to the work presented in [Davidson and Fraser 80, 84], I analyze machine descriptions during compiler construction, rather than analyzing instruction sequences that occur during code generation. [R Kessler 84] describes a system that analyzes machine descriptions during compiler construction, but which is limited to discovering instructions that are equivalent to instruction sequences of length 2. The techniques presented here can identify instruction sequences of arbitrary length that are equivalent to single instructions. I have applied this analysis to the descriptions of two machines, and used the results to replace hand-written case analysis routines in an otherwise table-driven code generator [Henry 84].
{"title":"Discovering machine-specific code improvements","authors":"P. Kessler","doi":"10.1145/12276.13336","DOIUrl":"https://doi.org/10.1145/12276.13336","url":null,"abstract":"I have designed and built a compiler construction tool that automates much of the case analysis necessary to exploit special purpose instructions on a target machine. Given a suitable description of the target machine, my analysis identifies instruction sequences that are equivalent to single instructions. During code generation, these equivalences can be used to avoid inefficient instruction sequences in favor of more efficient instructions.\u0000I present a working prototype of the instruction set analyzer needed in the framework outlined by [Giegerich 83]. In contrast to the work presented in [Davidson and Fraser 80, 84], I analyze machine descriptions during compiler construction, rather than analyzing instruction sequences that occur during code generation. [R Kessler 84] describes a system which analyzes machine descriptions during compiler construction, but which which is limited to discovering instructions that are equivalent to instruction sequences of length 2. The techniques presented here can identify instruction sequences of arbitrary length that are equivalent to single instructions.\u0000I have applied this analysis to the descriptions of two machines, and used the results to replace hand-written case analysis routines in an otherwise table-driven code generator [Henry 84].","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114213221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the introduction of LALR parsing, several algorithms have been presented for the computation of the lookahead sets needed to produce an LALR parser. The algorithm in Aho and Ullman[1] has perhaps received the widest exposure. The recent algorithms by DeRemer and Pennello[2] and Park, Choe, and Chang[4] are the most efficient. A new algorithm has been developed from an algorithm originally based on the Aho and Ullman algorithm and subsequently modified to take advantage of the efficiencies introduced by the DeRemer and Pennello algorithm. The new algorithm performs better than the Park, Choe, and Chang algorithm, and both perform better than the DeRemer and Pennello algorithm. The reasons for the relative performances are easily understood when the algorithms are presented in a common light.
{"title":"Unifying view of recent LALR(1) lookahead set algorithms","authors":"F. Ives","doi":"10.1145/12276.13324","DOIUrl":"https://doi.org/10.1145/12276.13324","url":null,"abstract":"Since the introduction of LALR parsing, several algorithms have been presented for the computation of the lookahead sets needed to produce an LALR parser. The algorithm in Aho and Ullman[1] has perhaps received the widest exposure. The recent algorithms by DeRemer and Pennello[2] and Park, Choe, and Chang[4] are the most efficient.\u0000A new algorithm has been developed from an algorithm originally based on the Aho and Ullman algorithm and subsequently modified to take advantage of the efficiencies introduced by the DeRemer and Pennello algorithm. The new algorithm performs better than the Park, Choe, and Chang algorithm, and both perform better than the DeRemer and Pennello algorithm. The reasons for the relative performances are easily understood when the algorithms are presented in a common light.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125648542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asynchronous CALL statements are necessary in order to use more than one processor in current multiprocessors. Detecting CALL statements that may be executed in parallel is one way to fill this need. This approach requires accurate approximations of called procedure effects. This is achieved by using new objects called Region and Execution Context. An algorithm to find asynchronous CALL statements is given. It involves a new dependence test to compute data dependence graphs, which provides better results than previous ones even when no CALL statements are involved. This method has been implemented in Parafrase and preliminary results are encouraging.
{"title":"Direct parallelization of call statements","authors":"R. Triolet, F. Irigoin, P. Feautrier","doi":"10.1145/12276.13329","DOIUrl":"https://doi.org/10.1145/12276.13329","url":null,"abstract":"Asynchronous CALL statements are necessary in order to use more than one processor in current multiprocessors. Detecting CALL statements that may be executed in parallel is one way to fill this need. This approach requires accurate approximations of called procedure effects. This is achieved by using new objects called Region and Execution Context. An algorithm to find asynchronous CALL statements is given. It involves a new dependence test to compute data dependence graphs, which provides better results than previous ones even when no CALL statements are involved. This method has been implemented in Parafrase and preliminary results are encouraging.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130642316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the traditional formulation of attribute grammars (AGs) circularities are not allowed, that is, no attribute-instance in any derivation tree may be defined in terms of itself. Elsewhere in mathematics and computing, though, circular (or recursive) definitions are commonplace, and even essential. Given appropriate constraints, recursive definitions are well-founded, and the least fixed-points they denote are computable. This is also the case for circular AGs. This paper presents constraints on individual attributes and semantic functions of an AG that are sufficient to guarantee that a circular AG specifies a well-defined translation and that circularly-defined attribute-instances can be computed via successive approximation. AGs that satisfy these constraints are called finitely recursive. An attribute evaluation paradigm is presented that incorporates successive approximation to evaluate circular attribute-instances, along with an algorithm to automatically construct such an evaluator. The attribute evaluators so produced are static in the sense that the order of evaluation at each production-instance in the derivation-tree is determined at the time that each translator is generated. A final algorithm is presented that tells which individual attributes and functions must satisfy the constraints.
{"title":"Automatic generation of fixed-point-finding evaluators for circular, but well-defined, attribute grammars","authors":"Rodney Farrow","doi":"10.1145/12276.13320","DOIUrl":"https://doi.org/10.1145/12276.13320","url":null,"abstract":"In the traditional formulation of attribute grammars (AGs) circularities are not allowed, that is, no attribute-instance in any derivation tree may be defined in terms of itself. Elsewhere in mathematics and computing, though, circular (or recursive) definitions are commonplace, and even essential. Given appropriate constraints, recursive definitions are well-founded, and the least fixed-points they denote are computable. This is also the case for circular AGs.\u0000This paper presents constraints on individual attributes and semantic functions of an AG that are sufficient to guarantee that a circular AG specifies a well-defined translation and that circularly-defined attribute-instances can be computed via successive approximation. AGs that satisfy these constraints are called finitely recursive.\u0000An attribute evaluation paradigm is presented that incorporates successive approximation to evaluate circular attribute-instances, along with an algorithm to automatically construct such an evaluator. The attribute evaluators so produced are static in the sense that the order of evaluation at each production-instance in the derivation-tree is determined at the time that each translator is generated.\u0000A final algorithm is presented that tells which individual attributes and functions must satisfy the constraints.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122971141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aggregate valued attributes, which store collections of keyed elements, are required in attribute grammars to communicate information from multiple definition sites to multiple use locations. For syntax directed editors and incremental compilers, symbol tables are represented as aggregate values. We present efficient algorithms for incrementally maintaining these aggregate values and give an incremental evaluation algorithm that restricts attribute propagation to attributes dependent only upon information within the aggregate value that has changed.
{"title":"Efficient incremental evaluation of aggregate values in attribute grammars","authors":"R. Hoover, T. Teitelbaum","doi":"10.1145/12276.13315","DOIUrl":"https://doi.org/10.1145/12276.13315","url":null,"abstract":"Aggregate valued attributes, which store collections of keyed elements, are required in attribute grammars to communicate information from multiple definition sites to multiple use locations. For syntax directed editors and incremental compilers, symbol tables are represented as aggregate values. We present efficient algorithms for incrementally maintaining these aggregate values and give an incremental evaluation algorithm that restricts attribute propagation to attributes dependent only upon information within the aggregate value that has changed.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128826088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Static evaluation underlies essentially all techniques for a priori semantic program manipulation, i.e. those that stop short of fully general execution. Included are such activities as type checking, partial evaluation, and, ultimately, optimized compilation. This paper describes a novel approach to static evaluation of programs in functional languages involving infinite data objects, i.e. those using normal order or “lazy” evaluation. Its principal features are abstract interpretation on a domain of demand patterns, and a notion of function “reversal”. The latter associates with each function f a derived function f' mapping demand patterns on f to demand patterns on its formal parameter. This is used for a comprehensive form of strictness analysis, aiding in efficient compilation. This analysis leads to a revised notion of basic block, appropriate as an intermediate representation for a normal order functional language. An implementation of the analysis technique in Prolog is sketched, as well as an effort currently underway to apply the technique to the generation of optimized G-machine code.
{"title":"Static evaluation of functional programs","authors":"G. Lindstrom","doi":"10.1145/12276.13331","DOIUrl":"https://doi.org/10.1145/12276.13331","url":null,"abstract":"Static evaluation underlies essentially all techniques for a priori semantic program manipulation, i.e. those that stop short of fully general execution. Included are such activities as type checking, partial evaluation, and, ultimately, optimized compilation.\u0000This paper describes a novel approach to static evaluation of programs in functional languages involving infinite data objects, i.e. those using normal order or “lazy” evaluation. Its principal features are abstract interpretation on a domain of demand patterns, and a notion of function “reversal”. The latter associates with each function f a derived function f' mapping demand patterns on f to demand patterns on its formal parameter. This is used for a comprehensive form of strictness analysis, aiding in efficient compilation.\u0000This analysis leads to a revised notion of basic block, appropriate as an intermediate representation for a normal order functional language. An implementation of the analysis technique in Prolog is sketched, as well as an effort currently underway to apply the technique to the generation of optimized G-machine code.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123366342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an overview of the design of a machine-code-level, global (intraprocedural) optimizer that supports several front-ends producing code for the Hewlett-Packard Precision Architecture family of machines. The basic optimization strategy is described, including information about the division of responsibilities between various components of the compiler. Optimization algorithms are described, including a discussion of the dataflow information they require. Measurements showing the collective and individual effects of various optimizer components are presented. The performance data presented here was collected using a preliminary version of the optimizer. Development is continuing and further improvements are expected.
{"title":"Effectiveness of a machine-level, global optimizer","authors":"M. S. Johnson, T. Miller","doi":"10.1145/12276.13321","DOIUrl":"https://doi.org/10.1145/12276.13321","url":null,"abstract":"We present an overview of the design of a machine-code-level, global (intraprocedural) optimizer that supports several front-ends producing code for the Hewlett-Packard Precision Architecture family of machines. The basic optimization strategy is described, including information about the division of responsibilities between various components of the compiler. Optimization algorithms are described, including a discussion of the dataflow information they require. Measurements showing the collective and individual effects of various optimizer components are presented.\u0000The performance data presented here was collected using a preliminary version of the optimizer. Development is continuing and further improvements are expected.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115739427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A transformation is presented for replacing conventional local attribute references in attribute grammars by upward remote references. The transformation serves to enhance the readability of the grammar and to facilitate storage optimization.
{"title":"A globalizing transformation for attribute grammars","authors":"Kari-Jouko Räihä, J. Tarhio","doi":"10.1145/12276.13319","DOIUrl":"https://doi.org/10.1145/12276.13319","url":null,"abstract":"A transformation is presented for replacing conventional local attribute references in attribute grammars by upward remote references. The purpose of the transformation is to enhance readability of the grammar and to facilitate easy storage optimization.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1986-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128606454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}