Parallelism in object-oriented programming languages (Antonio Corradi, L. Leonardi; doi: 10.1109/ICCL.1990.63783)
Parallelism in object-oriented systems is discussed. The most appealing way to introduce parallelism in an object framework is to associate execution capacity with objects. This approach leads to active objects. Synchronous and asynchronous communication between active objects is described using examples from existing languages. A second dimension of parallelism comes from accommodating several activities within the same object, and the synchronization techniques for these internal activities are described. The examples are written in a highly parallel language called Parallel Objects (PO). A distinctive characteristic of PO is that inheritance can be used to specify the concurrency internal to objects.
Coordination languages for open system design (P. Ciancarini; doi: 10.1109/ICCL.1990.63781)
Three coordination languages, Linda, Flat Concurrent Prolog, and DeltaProlog, are discussed with respect to their features for open system design. It is interesting to compare the Linda coordination model with the model of logic languages, because both involve forms of communication based on pattern matching. Although they seem to be equivalent in expressive power, current implementations of Flat Concurrent Prolog and DeltaProlog lack the efficiency of Linda, for reasons that are discussed. Shared Prolog, a new parallel logic language that is closer to the Linda coordination model, is introduced.
Data-oriented exception handling in Ada (Qian Cui, J. Gannon; doi: 10.1109/ICCL.1990.63765)
A set of language features that can be added to Ada is presented; the features associate exceptions with the operations of a data type and exception handlers with data objects. The notation is called data-oriented exception handling to distinguish it from more conventional, control-oriented mechanisms. The implementation of a preprocessor from the notation to Ada is described. Empirical studies indicate that control-oriented exception handling mechanisms are more complex than necessary for the tasks they perform, and that data-oriented exception handling can be used to produce programs that are smaller, better structured, and easier to understand and modify.
Incremental global optimization for faster recompilations (L. Pollock, M. Soffa; doi: 10.1109/ICCL.1990.63784)
Although modular programming with separate compilation aids in eliminating unnecessary recompilation and reoptimization, recent studies have discovered that more efficient code can be generated by collapsing a modular program through procedure inlining. To avoid having to reoptimize the resultant large procedures, techniques for incrementally incorporating changes into globally optimized code are presented. The algorithm determines which optimizations are no longer safe after a program change, and also discovers which new optimizations can be performed in order to maintain a high level of optimization. An intermediate representation is incrementally updated to reflect the current optimizations in the program. The techniques developed in the paper have been exploited to improve on current techniques for symbolic debugging of optimized code.
A self-applicable partial evaluator for the lambda calculus (N. Jones, C. K. Gomard, Anders Bondorf, O. Danvy, Torben Æ. Mogensen; doi: 10.1145/128861.128864)
A description is given of theoretical and a few practical aspects of an implemented self-applicable partial evaluator for the call-by-value untyped lambda calculus with constants, conditionals, and a fixed-point operator. A partial evaluator that is both higher-order and self-applicable is also described. A solution to the problem of binding-time analysis is presented. The partial evaluator is simple, completely automatic, and implemented in a side-effect-free subset of Scheme. It has been used to compile, to generate compilers, and to generate a compiler generator.
An object model for shared data (G. Kaiser, B. Hailpern; doi: 10.1109/ICCL.1990.63769)
The classical object model supports private data within objects and clean interfaces among objects and, by definition, does not permit sharing of data among arbitrary objects. This is a problem for certain real-world applications, where the same data logically belongs to multiple objects and may be distributed over multiple nodes on the network. Rather than give up the advantages of encapsulated objects in modeling real-world entities, a new object model is proposed that separates the distribution of computation units from information-hiding concerns. The model is introduced, a motivating example from the financial services domain is described, and a new language based on the model is presented.
Coercion as a metaphor for computation (S. Jagannathan; doi: 10.1109/ICCL.1990.63767)
Consideration is given to a generalization of coercion that permits structured transformations between program and data structures. The nature of these coercions goes significantly beyond what is found in most modern programming languages. The intent is to develop a programming model that permits the expression of a wide range of superficially diverse modularity constructs within a simple and unified framework. The design of this model is based on the observation that a variety of program structures found in modern programming languages are represented fundamentally in terms of an environment. Given suitable transformations that map the environment representation of a program structure into a data object, the programmer can gain explicit control over the naming environment. An investigation is made of the semantics of program/data coercion in the presence of a non-strict parallel evaluation semantics for environments. Parallelism and program/data coercion form an interesting symbiosis, and the investigation of this interaction forms the primary focus of this work.
Compiling SIMD programs for MIMD architectures (M. J. Quinn, P. Hatcher; doi: 10.1109/ICCL.1990.63785)
A summary of the advantages of data-parallel languages, a subclass of SIMD (single-instruction-stream, multiple-data-stream) languages, is presented, and it is shown how programs written in a data-parallel language can be compiled into loosely synchronous MIMD (multiple-instruction-stream, multiple-data-stream) programs suitable for efficient execution on multicomputers. It is shown that the compiler must first locate the points at which message passing is required. These points are identical to the synchronization points, so the message-passing primitives also synchronize the processors. Second, the compiler must transform the control structure of the input program to bring the message-passing primitives to the outermost level. To allow a single physical processor to emulate a number of processing elements, the compiler must insert FOR loops around the blocks of code that are delimited by the calls to the message-passing primitives. Finally, data-flow analysis can be used to eliminate some calls on message-passing routines and to combine multiple shorter messages into a single, longer message whenever possible.
GVL: a graphical, functional language for the specification of output in programming languages (J. Cordy, T.C.N. Graham; doi: 10.1109/ICCL.1990.63756)
The conceptual view model of output is based on the complete separation of the output specification of a program from the program itself, and on the use of implicit synchronization to allow the data state of the program to be continuously mapped to a display view. An output specification language called GVL is used to specify the mapping from the program's data state to the display. GVL is a functional language explicitly designed for specifying output. Building from a small number of basic primitives, it provides sufficient power to describe complex graphical output. Examples, including GVL specifications for linked-list diagrams, bar charts, and an address card file, are given. In keeping with its intended application, GVL is also a graphical language, in which the user draws output specifications directly on the display. It is shown how the functional paradigm avoids problems often associated with imperative graphical languages. A prototype implementation of GVL was used to produce examples of graphical output.
FLAME: a language for distributed programming (F. D. Paoli, M. Jazayeri; doi: 10.1109/ICCL.1990.63762)
FLAME is an experimental language for distributed programming. It is intended for applications that run on a collection of computers connected by a network. It is part of a project that deals with the building of open software systems. Open systems place stringent requirements on both the type of language one uses and the style of programming. FLAME tries to address the difficulties of writing such open, distributed applications with current technology. FLAME is based on C++, and its main contribution is to integrate the notions of objects and process groups at the programming language level. The result is a powerful language which, nevertheless, raises several new questions. The current state of FLAME is discussed, and potential areas for future development are outlined.