Our goal is to develop a practical syntactic error recovery method applicable within the general framework of viable prefix parsing. Our method represents an attempt to accurately diagnose and report all syntax errors without reporting errors that are not actually present. Successful recovery depends upon accurate diagnosis of errors together with sensible “correction” or alteration of the text to put the parse back on track. Accurate and helpful diagnostics are issued by indicating the nature of the recovery made for each error encountered. Error recovery occurs prior to, and independently of, any semantic analysis of the program; however, the method neither excludes the invocation of semantic actions while parsing nor precludes the use of semantic information for error recovery.

The method assumes a framework in which an LR or LL parser, driven by the tables produced by a parser generator, maintains an input symbol buffer, a state or prediction stack, and a parse stack. The input symbol buffer contains part or all of the sequence of remaining input tokens, including the current token. The LR state stack is analogous to the LL prediction stack; except when restricting our attention to the LL case, “prediction stack” serves as a generic term for either. The parse stack contains the symbols of right-hand sides that have not yet been reduced.
{"title":"A practical method for syntactic error diagnosis and recovery","authors":"M. Burke, Gerald A. Fisher","doi":"10.1145/800230.806981","DOIUrl":"https://doi.org/10.1145/800230.806981","url":null,"abstract":"Our goal is to develop a practical syntactic error recovery method applicable within the general framework of viable prefix parsing. Our method represents an attempt to accurately diagnose and report all syntax errors without reporting errors that are not actually present. Successful recovery depends upon accurate diagnosis of errors together with sensible “correction” or alteration of the text to put the parse back on track. The issuing of accurate and helpful diagnostics is achieved by indicating the nature of the recovery made for each error encountered. The error recovery is prior to and independent of any semantic analysis of the program. However, the method does not exclude the invocation of semantic actions while parsing or preclude the use of semantic information for error recovery.\u0000 The method assumes a framework in which an LR or LL parser, driven by the tables produced by a parser generator, maintains an input symbol buffer, state or prediction stack, and parse stack. The input symbol buffer contains part or all of the sequence of remaining input tokens, including the current token. The LR state stack is analogous to the LL prediction stack; except when restricting our attention to the LL case, prediction stack shall serve as a generic term indicating the LR state or LL prediction stack. 
The parse stack contains the symbols of the right hand sides that have not yet been reduced.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127776928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
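The framework this abstract assumes can be illustrated concretely. Below is a minimal sketch, not the paper's algorithm, of a table-driven LL parser maintaining the three structures named above: an input symbol buffer, a prediction stack, and a parse stack, with a naive single-token-deletion recovery standing in for the paper's more careful repairs. The grammar, parse table, and recovery policy are all invented for illustration.

```python
# Toy LL(1) grammar:  E -> int E'   E' -> + int E' | (empty)
# TABLE maps (nonterminal, lookahead) to the predicted right-hand side.
TABLE = {
    ("E", "int"): ["int", "E'"],
    ("E'", "+"): ["+", "int", "E'"],
    ("E'", "$"): [],
}
TERMINALS = {"int", "+", "$"}

def parse(tokens):
    buf = list(tokens) + ["$"]   # input symbol buffer; current token is buf[0]
    prediction = ["$", "E"]      # LL prediction stack ("prediction stack" generically)
    parse_stack = []             # symbols shifted so far
    diagnostics = []
    while prediction:
        top, cur = prediction[-1], buf[0]
        error = False
        if top in TERMINALS:
            if top == cur:
                prediction.pop()
                parse_stack.append(buf.pop(0))
            else:
                error = True
        elif (top, cur) in TABLE:
            prediction.pop()
            prediction.extend(reversed(TABLE[(top, cur)]))
        else:
            error = True
        if error:
            # Naive recovery: report and delete the offending token.
            if cur == "$":
                diagnostics.append("unexpected end of input")
                break
            diagnostics.append(f"unexpected '{cur}', deleted")
            buf.pop(0)
    return parse_stack, diagnostics
```

A correct input parses with no diagnostics; an input with a spurious token yields exactly one deletion report, illustrating "diagnose all errors without reporting errors that are not present" in miniature.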
CIMS PL/I is an implementation of PL/I on the Control Data 6600 computer. The most challenging aspect of implementing PL/I is dealing with the sheer size and complexity of the language; since a PL/I compiler is an inherently large object, building one is a good way to test the limits of ideas on compiler construction and programming methodology. Version 1 of CIMS PL/I has been in active use since 1973 and includes roughly 70% of the full language, but is limited by some severe design flaws. Version 2 is now under development; the parser and the declaration processor are in excellent working order, but the later passes are still being worked on. In this paper I shall describe Version 2 as though it already existed in full. Version 2 is intended to implement full ANSI Standard PL/I [1], and the passes that have been completed accept the full language. Both versions are themselves written almost entirely in PL/I, and have been developed using bootstrapping methods.
{"title":"The CIMS PL/I compiler","authors":"P. Abrahams","doi":"10.1145/800229.806960","DOIUrl":"https://doi.org/10.1145/800229.806960","url":null,"abstract":"CIMS PL/I is an implementation of PL/I on the Control Data 6600 computer. The most challenging aspect of implementing PL/I is dealing with the sheer size and complexity of the language; since a PL/I compiler is an inherently large object, building one is a good way to test the limits of ideas on compiler construction and programming methodology. Version 1 of CIMS PL/I has been in active use since 1973 and includes roughly 70% of the full language, but is limited by some severe design flaws. Version 2 is now under development; the parser and the declaration processor are in excellent working order, but the later passes are still being worked on. In this paper I shall describe Version 2 as though it already existed in full. Version 2 is intended to implement full ANSI Standard PL/I [1], and the passes that have been completed accept the full language. Both versions are themselves written almost entirely in PL/I, and have been developed using boot-strapping methods.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124449505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the design of an interpreter structure for modern programming languages such as Turing and Modula-2 that is modular and highly orthogonal while providing maximal flexibility and efficiency in implementation. At the outermost level, the structure consists of a front end, responsible for interaction with the user, and a back end, responsible for execution. The two are linked by a single database consisting of the tokenized statements of the user program. Interfaces between the major modules of each part are defined in such a way as to maximize reusability, and each interface can service a range of plug-compatible modules implementing radically different semantics. The design accommodates a wide spectrum of interpreter types ranging from batch-oriented compiler-simulators to statement-by-statement interactive execution, and provides for a range of program editing tools from simple line editors through to modern language-directed programming environments. It has served as the basis for several interpretive systems including the production Turing interpreter, the Turing Programming Environment, and the Turing Tool software maintenance tool.
{"title":"Design of an interpretive environment for Turing","authors":"J. Cordy, T. Graham","doi":"10.1145/29650.29671","DOIUrl":"https://doi.org/10.1145/29650.29671","url":null,"abstract":"This paper presents the design of an interpreter structure for modern programming languages such as Turing and Modula II that is modular and highly orthogonal while providing maximal flexibility and efficiency in implementation. At the outermost level, the structure consists of a front end, responsible for interaction with the user, and a back end, responsible for execution. The two are linked by a single database consisting of the tokenized statements of the user program. Interfaces between the major modules of each part are defined in such a way as to maximize reusability, and each interface can service a range of plug-compatible modules implementing radically different semantics. The design accommodates a wide spectrum of interpreter types ranging from batch-oriented compiler-simulators to statement-by-statement interactive execution, and provides for a range of program editing tools from simple line editors through to modern language-directed programming environments. It has served as the basis for several interpretive systems including the production Turing interpreter, the Turing Programming Environment, and the Turing Tool software maintenance tool.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126614111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes key features of an interpreter for a language-based editor. The interpreter unites in a RISC framework features which have been used in other domains. The paper examines each feature's integration into the RISC framework.
{"title":"The JADE interpreter: a RISC interpreter for syntax directed editing","authors":"C. F. Clark","doi":"10.1145/29650.29674","DOIUrl":"https://doi.org/10.1145/29650.29674","url":null,"abstract":"This paper describes key features of an interpreter for a language-based editor. The interpreter unites in a RISC framework features which have been used in other domains. The paper examines each feature's integration into the RISC framework.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"247 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123259912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpreters replace the edit/compile/run cycle with edit/run. Dynamic computing environments, like spreadsheets, shorten this still more to just edit. So-called "Visiprog" environments, such as Maryland's XED, permit developing normal imperative programs in a dynamic computing environment. Because XED and similar environments show the results of executing a program after every (reasonable) editing step, they raise the issue of efficient incremental execution. Incremental execution optimizations are also applicable to any programming situation, including batch/cards, in which nearly the same program is run many times on nearly the same data. However, the requirement of remembering large amounts of internal state between runs makes incremental execution most natural for interpreted languages. This paper examines some algorithms for incremental execution. Based on the frequency of typical program editing changes, we predict the importance of optimizing certain kinds of incremental execution. We also examine actual speedups obtained in executing programs after subjecting them to these simulated incremental edits under these optimizations. The speedups range from factors of 1.1 to near 10. Finally, we discuss the feasibility of including these optimizations in an actual dynamic computing environment like XED, and in more traditional programming environments.
{"title":"Incremental re-execution of programs","authors":"R. Karinthi, M. Weiser","doi":"10.1145/29650.29654","DOIUrl":"https://doi.org/10.1145/29650.29654","url":null,"abstract":"Interpreters replace the edit/compile/run cyle with edit/run. Dynamic computing environments, like spreadsheets, shorten this still more to just edit. So-called \"Visiprog\" environments, such as Maryland's XED, permit developing normal imperative programs in a dynamic computing environment, XED and similar environments, because they show the results of executing a program after every (reasonable) editing step, raise the issue of efficient incremental execution. Incremental execution optimizations are also applicable to any programming situation, including batch/cards, in which nearly the same program is run many times on nearly the same data. However, the requirement of remembering large amounts of internal state between runs make incremental exectution most natural for interpreted languages. This paper examines some algorithms for incremental execution. Based on the frequency of typical program editing changes, we predict the importance of optimizing certain kinds of incremental execution. We also examine actual speedups obtained in executing programs after subjecting them to these simulated incremental edits under these optimizations. The speedups range from factors of 1.1 to near 10. 
Finally, we discuss the feasibility of including these optimizations in an actual dynamic computing environment like XED, and in more traditional programming environments.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126229266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
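One family of incremental re-execution optimizations can be sketched as statement-level caching: each statement's effect is cached keyed on the statement itself and the state it read, so an edited program re-executes only from the first changed point. This is a minimal illustration of the idea, not the paper's algorithms; the assignment-list program representation and caching granularity are assumptions.

```python
def run_incremental(stmts, cache):
    """stmts: list of (target, expr) assignment statements.
    cache maps (index, target, expr, frozen_state) -> resulting state,
    so unchanged prefixes of an edited program are replayed for free."""
    state = {}
    executed = 0                      # statements actually re-evaluated
    for i, (target, expr) in enumerate(stmts):
        key = (i, target, expr, tuple(sorted(state.items())))
        if key in cache:
            state = dict(cache[key])  # cache hit: skip evaluation
        else:
            executed += 1
            state = dict(state)
            state[target] = eval(expr, {}, dict(state))
            cache[key] = dict(state)
    return state, executed
```

Re-running an unedited program evaluates nothing; editing one middle statement re-evaluates only it and its downstream, which is where speedups of the magnitude the paper reports would come from.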
Language implementation is in need of automation. Although compiler construction has long been aided by parser generators and other tools, interpreters and runtime systems have been neglected, even though they constitute a large component of languages like Lisp, Prolog, and Smalltalk. Of the several parts of a runtime system, the primitive datatype definitions present some of the most difficult decisions for the implementor. The effectiveness of type discrimination schemes, interactions between storage allocation and virtual memory, and general time/space tradeoffs are issues that have no simple resolution; they must be evaluated for each implementation. A formalism for describing implementations has been developed and used in a prototype designer of primitive data structures. The designer is a collection of heuristic rules that produce multiple designs of differing characteristics. Cost evaluation on machine code derived from those designs yields performance formulas, which are then used to estimate the designs' effect on benchmark programs.
{"title":"Automatic design and implementation of language data types","authors":"S. Shebs, R. Kessler","doi":"10.1145/29650.29653","DOIUrl":"https://doi.org/10.1145/29650.29653","url":null,"abstract":"Language implementation is in need of automation. Although compiler construction has long been aided by parser generators and other tools, interpreters and runtime systems have been neglected, even though they constitute a large component of languages like Lisp, Prolog, and Smalltalk. Of the several parts of a runtime system, the primitive datatype definitions present some of the most difficult decisions for the implementor. The effectiveness of type discrimination schemes, interactions between storage allocation and virtual memory, and general time/space tradeoffs are issues that have no simple resolution-they must be evaluated for each implementation. A formalism for describing implementations has been developed and used in a prototype designer of primitive data structures. The designer is a collection of heuristic rules that produce multiple designs of differing characteristics. Cost evaluation on machine code derived from those designs yields performance formulas, which are then used to estimate the designs' effect on benchmark programs.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127812346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To design a new processor or to modify an existing one, designers need to gather data to estimate the influence of specific architecture features on the performance of the proposed machine (PM). To obtain this data, it is necessary to measure on an existing machine (EM) the dynamic behavior of typical programs. Traditionally, simulators have been used to obtain measurements for PMs. Since several hundred EM instructions are required to decode, interpret, and measure each simulated (PM) instruction, the simulation time of typical programs is prohibitively large. Thus, designers tend to simulate only small programs and the results obtained might not be representative of a real system behavior. In this paper we present an alternative tool for collecting architecture measurements: the Block-and-Actions Generator (BKGEN). BKGEN produces a version of the program being measured which is directly executable by the EM. This executable version is obtained directly with the EM compiler or with the PM compiler and an assembly-to-assembly translator. The choice between these alternatives depends on the EM and PM compiler technology and the type of measurements to be obtained. BKGEN also collects the PM events to be measured (called actions). Each EM block of instructions is associated with a PM block of actions so that when the program is executed, it collects the measurements associated with the PM. The main advantage of BKGEN is that the execution time is substantially reduced compared to the execution time of a simulator while collecting similar data. Thus, large typical programs (compilers, assemblers, word processors, ...) can be used by the designer to obtain meaningful measurements.
{"title":"A block-and-actions generator as an alternative to a simulator for collecting architecture measurements","authors":"M. Huguet, T. Lang, Y. Tamir","doi":"10.1145/29650.29652","DOIUrl":"https://doi.org/10.1145/29650.29652","url":null,"abstract":"To design a new processor or to modify an existing one, designers need to gather data to estimate the influence of specific architecture features on the performance of the proposed machine (PM). To obtain this data, it is necessary to measure on an existing machine (EM) the dynamic behavior of typical programs. Traditionally, simulators have been used to obtain measurements for PMs. Since several hundred EM instructions are required to decode, interpret, and measure each simulated (PM) instruction, the simulation time of typical programs is prohibitively large. Thus, designers tend to simulate only small programs and the results obtained might not be representative of a real system behavior. In this paper we present an alternative tool for collecting architecture measurements: the Block-and-Actions Generator (BKGEN). BKGEN produces a version of the program being measured which is directly executable by the EM. This executable version is obtained directly with the EM compiler or with the PM compiler and a assembly-to-assembly translator. The choice between these alternatives depends on the EM and PM compiler technology and the type of measurements to be obtained. BKGEN also collects the PM events to be measured (called actions). Each EM block of instructions is associated with a PM block of actions so that when the program is executed, it collects the measurements associated with the PM. The main advantage of BKGEN is that the execution time is substantially reduced compared to the execution time of a simulator while collecting similar data. Thus, large typical programs (compilers, assemblers, word processors, ...) 
can be used by the designer to obtain meaningful measurements.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129390562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
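The block-and-actions idea can be sketched as follows: each block of EM code runs natively, and an associated block of PM "actions" (event counters) runs alongside it, instead of decoding and interpreting every PM instruction as a simulator would. The action names and the instrumented program below are invented for illustration.

```python
counters = {}   # accumulated PM event counts for the whole run

def actions(*events):
    """Record the PM events associated with one executed EM block."""
    for e in events:
        counters[e] = counters.get(e, 0) + 1

def measured_program(n):
    """An EM-executable program with a block of actions paired
    with each of its basic blocks (loop body, exit)."""
    total = 0
    for i in range(n):
        actions("pm_load", "pm_add", "pm_branch")  # loop-body block's actions
        total += i
    actions("pm_return")                           # exit block's actions
    return total
```

The program still computes its normal result at near-native speed, while the counters report what the proposed machine would have done, which is why large realistic workloads become feasible to measure.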
The DI interpreter is both a debugger and interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and the copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.
{"title":"DI: an interactive debugging interpreter for applicative languages","authors":"S. Skedzielewski, R. K. Yates, R. Oldehoeft","doi":"10.1145/29650.29661","DOIUrl":"https://doi.org/10.1145/29650.29661","url":null,"abstract":"The DI interpreter is both a debugger and interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report the activity of dynamic storage activity, reference counting, copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122829001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The object-oriented paradigm is applied to the interpretation of programming languages. An intermediate representation of a program is created as a collection of objects representing various entities in the conceptual world of the source language. These objects cover both the static and the dynamic aspects of a program. As a major advantage of this approach, issues that are traditionally handled by very different techniques (like symbol table management and the generation and execution of intermediate code) can be treated in a unified manner. The specification language of an interpreter generator based on these principles is described.
{"title":"TOOLS: a unifying approach to object-oriented language interpretation","authors":"K. Koskimies, J. Paakki","doi":"10.1145/29650.29667","DOIUrl":"https://doi.org/10.1145/29650.29667","url":null,"abstract":"The object-oriented paradigm is applied to the interpreting of programming languages. An intermediate representation of a program is created as a collection of objects representing various entities in the conceptual world of the source language. These objects cover both the static and the dynamic aspects of a program. As a major advantage of this approach, issues that are traditionally handled by very different techniques (like symbol table management and the generation and execution of intermediate code) can be treated in a unified manner. The specification language of an interpreter generator based on these principles is described.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116705321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Smalltalk programming language allows contexts (stack frames) to be accessed and manipulated in very general ways. This sometimes requires that contexts be retained even after they have terminated execution, and that they be reclaimed other than by LIFO stack discipline. The authoritative definition of Smalltalk [Goldberg and Robson 83] uses reference counting garbage collection to manage contexts, an approach found to be inadequate in practice [Krasner, et al. 83]. Deutsch and Schiffman have described a technique that uses an actual stack as much as possible [Deutsch and Schiffman 84]. Here we offer a less complex technique that we expect will have lower total overhead and reclaim many frames sooner and more easily. We are implementing our technique as part of a state-of-the-art Smalltalk interpreter. The approach may apply to other languages that allow indefinite lifetimes for execution contexts, be they interpreted or compiled.
{"title":"Managing stack frames in Smalltalk","authors":"J. Moss","doi":"10.1145/29650.29675","DOIUrl":"https://doi.org/10.1145/29650.29675","url":null,"abstract":"The Smalltalk programming language allows contexts (stack frames) to be accessed and manipulated in very general ways. This sometimes requires that contexts be retained even after they have terminated executing, and that they be reclaimed other than by LIFO stack discipline. The authoritative definition of Smalltalk [Goldberg and Robson 83] uses reference counting garbage collection to manage contexts, an approach found to be inadequate in practice [Krasner, et al. 83]. Deutsch and Schiffman have described a technique that uses an actual stack as much as possible [Deutsch and Schiffman 84]. Here we offer a less complex technique that we expect will have lower total overhead and reclaim many frames sooner and more easily. We are implementing our technique as part of a state of the art Smalltalk interpreter. The approach may apply to other languages that allow indefinite lifetimes for execution contexts, be they interpreted or compiled.","PeriodicalId":414056,"journal":{"name":"SIGPLAN Conferences and Workshops","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122473160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}