The Tahiti programming language: events as first-class objects
J. Hearne, D. Jusak. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63780

No programming language embodies a fully abstract and consistent facility for representing and managing computational events. Tahiti is an experimental CSP-based language that augments the standard primitive data types with the type Event, which enables data objects to be bound to occurrences in the execution of the program itself. A description is presented of Tahiti's constructs for representing and managing events, without addressing the language's formal semantics or the many implementation issues it raises.
Cache performance of combinator graph reduction
P. Koopman, Peter Lee, D. Siewiorek. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63759

The Threaded Interpretive Graph Reduction Engine (TIGRE) was developed for the efficient reduction of combinator graphs in support of functional programming languages and other applications. Results are presented of cache simulations of the TIGRE graph reducer with the following parameters varied: cache size, cache organization, block size, associativity, replacement policy, write policy, and write allocation. As a check on these results, the simulations are compared to measured performance on real hardware. From the simulation study, it is concluded that graph reduction in TIGRE depends heavily on a write-allocate strategy for good performance and exhibits very high spatial and temporal locality.
Experience with distributed programming in Orca
H. Bal, M. Kaashoek, A. Tanenbaum. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63763

Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, Orca provides a communication model based on logically shared data. The language and its implementation are briefly described, and experiences are reported with three parallel applications: the traveling salesman problem (TSP), the all-pairs shortest paths problem (ASP), and successive overrelaxation (SOR). These applications have different needs for shared data: TSP benefits greatly from the support for shared data; ASP benefits from the use of broadcast communication, even though it is hidden in the implementation; SOR merely requires point-to-point communication, but can still be implemented in the language by simulating message passing. How these applications are programmed in Orca is discussed, and the most interesting portions of the Orca code are given. Performance measurements for these programs on a distributed system consisting of 10 MC68020s connected by an Ethernet are also included; they show significant speedups for all three programs.
KSL/Logic: integration of logic with objects
M. Ibrahim, F. Cummins. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63778

KSL/Logic is an integration of logic and object-oriented programming that adds the declarative framework and deductive reasoning of logic programming to the powerful modeling capabilities of the object-oriented paradigm. Predicates, logic expressions, and the generalized search protocol of KSL/Logic are implemented as an integral part of KSL, a reflective, object-oriented programming language. KSL/Logic provides capabilities that go beyond those of Prolog to permit domain-based reasoning, functional arguments, matching of complex object patterns, and object representation of facts. The syntax and semantics of KSL/Logic are described, and the object implementation of its predicate resolution is examined.
Improving module reuse by interface adaptation
James M. Purtilo, J. Atlee. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63776

Most reuse techniques that involve adaptation of software components focus on transformations at either the design level or the source code level (i.e., individual modules). A third fundamental type of transformation is proposed: interface adaptation. By introducing transformations at the point where module interfaces are bound, programmers can reduce coupling between modules in a design and simultaneously increase cohesion within modules. A language, called Nimble, was created for programmers to implement interface adaptations.
Using languages for capture, analysis and display of performance information for parallel and distributed applications
C. Kilpatrick, K. Schwan, D. Ogle. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63773

A graphical performance display tool can offer insights into the nature of a program's performance that would be difficult, and sometimes impossible, to achieve with a traditional textual view of performance activity. Two languages with which programmers can specify the collection and display of performance information about parallel and distributed application programs are discussed. It is demonstrated that visual environments for program information display may be developed within a uniform conceptual framework. The display language allows the user to create displays tailored for viewing the performance of various monitored application components. The next step is to use this monitored information as input to a variety of user-specified performance models.
A language for distributed applications
M. Barbacci, Jeannette M. Wing. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63761

Durra is a language designed to support the development of distributed applications consisting of multiple, concurrent, large-grained tasks executing in a heterogeneous network. An application-level program is written in Durra as a set of task descriptions that prescribes a way to manage the resources of a heterogeneous machine network. The application describes the tasks to be instantiated and executed as concurrent processes, the intermediate queues required to store the messages as they move from producer to consumer processes, and the possible dynamic reconfigurations of the application. The application-level programming paradigm fits a top-down, incremental method of software development very naturally. It is suggested that a language like Durra would be of great value in the development of large, distributed systems.
Computation of interprocedural definition and use dependencies
M. J. Harrold, M. Soffa. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63786

The detection of various dependencies that exist among the definitions and uses of variables in a program is necessary in many language-processing tools. The computation of definition-use dependencies that reach across procedure boundaries is considered. In particular, efficient techniques are presented for computing interprocedural definition-use and use-definition chains and for incrementally updating the chains when a change is made in a procedure. Intraprocedural definition and use information for each procedure is first abstracted and used to construct an interprocedural flow graph. The intraprocedural information is then propagated in two phases throughout the interprocedural flow graph to obtain the complete set of interprocedural reaching definitions and reachable uses. Interprocedural definition-use and use-definition chains are computed from this reaching information. The technique handles the interprocedural data-flow effects of both reference parameters and global variables, and supports separate compilation even in the presence of recursion. The technique has been implemented on a Sun 3/50 workstation and incorporated into an interprocedural data flow tester.
Multi-dimensional organization and browsing of object-oriented systems
H. Ossher. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63768

A two-dimensional organization for object-oriented systems and a browser supporting that organization are described. The organization provides sites for documenting both generic functions and object types, allows convenient browsing and information hiding according to both function and type, and supports the notion of abstract types. Also described is the extension of the organization and browser to multiple dimensions to allow for multi-methods that are split into separate implementations based on criteria in addition to receiver type. Inheritance and information hiding in the multidimensional case are discussed briefly. The multidimensional browser has been implemented on top of the RPDE³ environment framework.
Parallel graph-reduction with a shared memory multiprocessor system
György E. Révész. Proceedings of the 1990 International Conference on Computer Languages. DOI: 10.1109/ICCL.1990.63758

A tightly coupled multiprocessor system in which each processor has direct access to a shared memory is studied. The system used for the experiments has only eight processors, but it supports a concurrent fetch-and-add operation, which is used extensively in the graph reducer. A parallel graph-reduction technique is developed for such a system, and its performance is measured with several benchmark programs. As a byproduct, a new on-the-fly garbage collector that combines two different collection techniques has been developed, along with a new read-only graph-traversal technique that lets any number of concurrent processes independently traverse a shared graph.