"Lazy evaluation in logic programming," S. Narain. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63777
A method for bringing the concept of lazy evaluation to logic programming in a rigorous yet efficient manner is presented. Its main advantage over previous methods is its considerable efficiency, from both the theoretical and the implementation points of view. It is based on making the SLD-resolution rule of inference directly simulate the behavior of a lazy rewriting interpreter satisfying strong computational properties. It thereby yields a powerful system in which one can program with functions, relations, nondeterminism, lazy evaluation, and combinations of these, all within a single logical framework. The method can also be viewed as contributing to the design and implementation of lazy rewriting: it introduces lazy, nondeterministic rewriting; it proposes a new method of shrinking the search space of reductions to a single branch; and it admits a very simple yet efficient implementation in Prolog, so the programming of many low-level tasks involved in typical implementations of lazy rewriting is avoided altogether.
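The paper's method works inside Prolog via SLD-resolution; as a loose analogy only (not the paper's technique or code), lazy evaluation means a structure is elaborated only as far as a consumer demands, which Python generators can sketch:

```python
# Hypothetical sketch: demand-driven evaluation, loosely analogous to the lazy
# rewriting the paper simulates with SLD-resolution. Not the paper's method.
def naturals(start=0):
    """Conceptually infinite lazy stream of natural numbers."""
    n = start
    while True:
        yield n
        n += 1

def take(n, stream):
    """Force only the first n elements; the rest is never computed."""
    return [next(stream) for _ in range(n)]

first_five = take(5, naturals())  # only five elaboration steps are performed
```

The point of the analogy: the producer is written as if it ran forever, and the consumer's demand bounds the actual work, which is the behavior a lazy rewriting interpreter gives to logic programs.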
"Subdivided procedures: a language extension supporting extensible programming," W. Harrison, H. Ossher. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63774
An extension of conventional procedures is described in which procedure bodies handling multiple cases can be subdivided into separate bodies, each handling a single case. Subdivision is based on criteria specified by the programmer. Underlying call support selects the body to execute in response to each call. Subdivided procedures support a programming style in which great attention is paid to facilitating subsequent extensions. Normally, extensions have to be made by changing source code; subdivided procedures allow them to be made instead by adding new bodies. Subdivided procedures can be implemented on top of procedural languages with a preprocessor that examines just a file of definitions; it does not need to examine procedure code. A restricted version of the mechanism implemented within the RPDE³ environment framework has been in constant use for more than two years. Experience has shown that it facilitates extensible programming at little or no cost in call-time overhead.
"Specification and automatic prototype implementation of polymorphic objects in Turing using the TXL dialect processor," J. Cordy, Eric Promislow. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63770
Object-oriented dialects of existing programming languages are often implemented using a preprocessor that translates from the dialect to an equivalent program in the original programming language. Unfortunately, the nature of the preprocessing done by these implementations is hidden in the ad hoc algorithms of the preprocessors themselves, except as demonstrated by examples. An attempt to catalogue and generalize these syntactic transformations using a simple set of applicative transformation rules expressed in the TXL dialect description language is described. Example transformation rules for implementing object types and parametric polymorphism in an object-oriented dialect of the Turing programming language are given. These rules easily generalize to other languages of the Pascal family and have been used to automatically implement Objective Turing.
"A practical animation language for software development," J. Stasko. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63755
A practical language for creating real-time, two-dimensional, smooth, color animations is described. Animation can be a valuable component in a variety of domains, such as user interface design, on-line help information, and computer-aided instruction. The animation language produces aesthetically pleasing, smooth imagery, and is easy to learn and use. The language is based on four abstract data types: locations, images, paths, and transitions. Animation designers create and modify objects of these types in order to produce animation sequences. In addition, a precise specification and semantics are provided for all the data type operations. This rigorous definition helps simplify animation design by formalizing the actions resulting from the operations. A prototype algorithm animation system that utilizes this design language as its basis has been implemented.
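To make the four-type design concrete, here is a hypothetical sketch of two of the abstract data types, locations and paths (the actual operations and their semantics are defined in the paper; `Location` and `linear_path` are illustrative names):

```python
# Hypothetical sketch: a location is a 2D point; a path is the sequence of
# locations an image visits, one per frame, producing smooth motion.
from dataclasses import dataclass

@dataclass(frozen=True)
class Location:
    x: float
    y: float

def linear_path(start, end, frames):
    """Interpolate a straight-line path from start to end over `frames` steps."""
    return [Location(start.x + (end.x - start.x) * i / frames,
                     start.y + (end.y - start.y) * i / frames)
            for i in range(frames + 1)]
```

Transitions in the paper's model would then map such a path onto an image over time; giving each operation a precise meaning like this is what the abstract means by formalized semantics.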
"GARTL: a real-time programming language based on multi-version computation," C. Marlin, Wei Zhao, Graeme Doherty, Andrew Bohonis. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63766
The increasing importance of real-time computing systems is widely recognized, and such systems are presently the subject of much research. A particularly attractive approach to the programming of hard real-time systems is the identification of multiple versions of the task to be carried out. If this is done, then the system scheduler used by a real-time system can select the version that gives the most precise results in the time available: the more time available, the more precise the results. This approach to programming hard real-time systems is called multiversion computation. The question of suitable language support for multiversion computation is explored through the description of GARTL, a real-time programming language. Some aspects of the implementation of GARTL are also discussed.
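The scheduling idea described above, picking the most precise version that fits the available time, can be sketched in a few lines of hypothetical Python (GARTL itself is a dedicated language; the version names and millisecond costs here are invented for illustration):

```python
# Hypothetical sketch of multiversion computation: several versions of one
# task, ordered from cheap/imprecise to costly/precise; the scheduler selects
# the most precise version whose cost fits the time budget.
def pi_rough():
    return 3.0

def pi_better():
    return 3.14

def pi_best():
    return 3.14159

# (version, assumed cost in milliseconds), most precise last
VERSIONS = [(pi_rough, 1), (pi_better, 10), (pi_best, 100)]

def run_within(budget_ms):
    """Run the most precise version that fits the budget."""
    feasible = [fn for fn, cost in VERSIONS if cost <= budget_ms]
    if not feasible:
        raise TimeoutError("no version fits the budget")
    return feasible[-1]()
```

The more budget the caller grants, the more precise the answer, which is exactly the trade-off the abstract describes.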
"A 'two degrees of freedom' approach for parallel programming," J. Bahsoun, L. Féraud, C. Bétourné. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63782
The concept of a priority controlled module (PCM), intended to implement shared objects in a parallel-programming environment, is presented. The semantics of a PCM are given using a temporal logic. Experiences adapting inheritance mechanisms to the synchronization domain using the concept of a PCM are also described. Because the PCM relies mainly on the separation between data abstraction and synchronization, each of these can be thought of as a degree of freedom. Each degree of freedom appears as a reusable programming entity and can be implemented using the class concept found in object-oriented languages.
"LEGEND: a language for generic component library description," N. Dutt. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63775
LEGEND is a novel generator-generator language for the definition, generation, and maintenance of generic component libraries used in high-level hardware synthesis. Each LEGEND description generates a library generator, GENUS, which is organized as a hierarchy of generic component generators, templates, and instances. High-level synthesis systems typically transform the abstract behavior of a design into an interconnection of generic component instances derived from a library such as GENUS. Although existing hardware description languages (such as VHDL) can effectively describe particular component libraries, they lack the capability of generating these component libraries from a high-level description. LEGEND complements a language such as VHDL by providing a component library generator-generator with behavioral models for simulation and subsequent synthesis. LEGEND-generated components have realistic register-transfer semantics, including clocking, asynchrony, and data bidirectionality. LEGEND's simple and extensible syntax allows users to add and modify component types easily. LEGEND is currently implemented on SUN3s under C/UNIX.
"Reliable distributed computing with Avalon/Common Lisp," S. Clamen, Linda D. Leibengood, S. Nettles, Jeannette M. Wing. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63772
An overview of the following novel aspects of Avalon/Common Lisp is presented: (1) support for remote evaluation through a new evaluator data type; (2) a generalization of the traditional client/server model of computation, allowing clients to extend server interfaces and server writers to hide aspects of distribution, such as caching, from clients; (3) support for failure atomicity through automatic commit and abort processing of transactions; and (4) support for persistence through automatic crash recovery of atomic data. These capabilities provide programmers with the flexibility to exploit the semantics of an application to enhance its reliability and efficiency. Avalon/Common Lisp runs on IBM RTs under the Mach operating system. Though the design of Avalon/Common Lisp exploits some features of Common Lisp, e.g., its packaging mechanism, all of the constructs are applicable to any Lisp-like language.
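Point (3), failure atomicity via commit/abort, can be illustrated with a deliberately tiny in-memory toy (Avalon's actual mechanism covers persistent data and crash recovery, which this sketch does not attempt; `AtomicStore` is an invented name):

```python
# Hypothetical sketch of failure atomicity: updates run against a staged copy
# of the state and become visible only on commit; any failure aborts, leaving
# the visible state untouched.
class AtomicStore:
    def __init__(self):
        self.data = {}

    def transaction(self, action):
        """Run action on a staged copy; commit on success, abort on failure."""
        staged = dict(self.data)
        try:
            action(staged)
        except Exception:
            return False       # abort: self.data was never modified
        self.data = staged     # commit: all updates become visible at once
        return True
```

The all-or-nothing guarantee is what lets a programmer reason about application semantics despite partial failures, which is the flexibility the abstract claims.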
"Implementation and evaluation of dynamic predicates on the sequential inference machine CHI," A. Atarashi, A. Konagaya, S. Habata, M. Yokota. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63779
The dynamic clause compilation technique used to implement Prolog's dynamic predicates is described, and the effectiveness of the technique when applied to a practical application program executed on the sequential inference machine CHI is reported. Dynamic predicates are indispensable in writing practical Prolog application programs. According to the authors' application program analysis, many applications spend more than half of their total execution time in dynamic predicate execution. This means that speeding up dynamic predicates is essential for improving Prolog application performance. From this point of view, the authors introduced the dynamic clause compilation technique and implemented it on CHI: as soon as a clause is added to Prolog's database, the clause is compiled into machine instructions. This technique greatly accelerates dynamic predicate execution. Application program analysis shows that dynamic clause compilation makes application execution up to five times faster than conventional dynamic predicate implementations.
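The key move, compiling a clause the moment it is asserted so that later calls pay no interpretation cost, has a rough analogy in any language with run-time compilation. A hypothetical Python sketch (CHI compiles Prolog clauses to machine instructions; `assertz` and `call` here are merely named after the Prolog operations):

```python
# Hypothetical analogy to dynamic clause compilation: a "clause" added at run
# time is compiled immediately (here, to Python bytecode via compile/eval)
# rather than being re-interpreted on every call.
COMPILED = {}

def assertz(name, expr_src):
    """Add a one-argument rule and compile it at assert time."""
    COMPILED[name] = eval(compile(f"lambda X: {expr_src}", name, "eval"))

def call(name, arg):
    """Invoke the already-compiled rule: no per-call interpretation overhead."""
    return COMPILED[name](arg)

assertz("double", "X * 2")
```

Paying the compilation cost once per assert rather than an interpretation cost per call is what yields the speedup the paper measures.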
"Enhancing documents with embedded programs: how Ness extends insets in the Andrew ToolKit," W. J. Hansen. Proceedings, 1990 International Conference on Computer Languages, 12 March 1990. doi:10.1109/ICCL.1990.63757
The problems of embedding programs in documents are sketched and the solutions adopted in the Ness component of the Andrew ToolKit are reviewed. A key question is the connection from user actions to program functions. Other questions include the appropriate level of programming language, its string-processing capabilities, and security.