We discuss the development and implementation of a component of a heterogeneous supercomputing environment at the Pittsburgh Supercomputing Center (PSC). This component, built upon a CRAY Y-MP/832, a Thinking Machines CM-2, and a HiPPI communications channel, allows applications to be partitioned between the two supercomputers. Development work included the design of a user environment and performance analysis of the resulting system. Implementation involved tackling communications and data representation issues in the heterogeneous environment. This work provides a case study of a heterogeneous supercomputing environment, showing the strengths and weaknesses of currently available technology along with directions for future development.
{"title":"Deployment of a HiPPI-Based Distributed Supercomputing Environment at the Pittsburgh Supercomputing Center","authors":"J. Mahdavi, G.L. Huntoon, M. Mathis","doi":"10.1109/WHP.1992.664391","DOIUrl":"https://doi.org/10.1109/WHP.1992.664391","url":null,"abstract":"We discuss the development and implementation of a component of a heterogeneous supercomputing environment at the Pittsburgh Supercomputing Center (PSC). This component, built upon a CRAY Y-MP/832, a Thinking Machines CM-2, and a HiPPI communications channel, allows applications to be partitioned between the two supercomputers. Development work included the design of a user environment and performance analysis of the resulting system. Implementation involved tackling communications and data representation issues in the heterogeneous environment. This work provides a case study of a heterogeneous supercomputing environment, showing the strengths and weaknesses of currently available technology along with directions for future development.","PeriodicalId":201815,"journal":{"name":"Proceedings. Workshop on Heterogeneous Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122222073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses some issues of parallelism in functional programs and how to exploit it efficiently by improving the granularity of such programs on a multiprocessor. The challenge is to partition a functional program (or a process) into appropriately sized sub-processes so that the computation time of each local sub-process is greater than the communication overhead involved in sending other sub-processes for remote evaluation. It is shown how some parallel programs can be run more efficiently given prior information about the time complexities (in big-O notation) and relative time complexities of their sub-expressions, with the help of some practical examples on the larger-grain distributed multiprocessor machine LAGER.
{"title":"Controlling Parallelism for Larger Grain Execution of Functional Programs Using Complexity Information","authors":"P. Maheshwari","doi":"10.1109/WHP.1992.664387","DOIUrl":"https://doi.org/10.1109/WHP.1992.664387","url":null,"abstract":"This paper discusses some issues of parallelism in functional programs and how to exploit it efficiently by improving the granularity of such programs on a multiprocessor. The challenge is to partition a functional program (or a process) into appropriately sized sub-processes so that the computation time of each local sub-process is greater than the communication overhead involved in sending other sub-processes for remote evaluation. It is shown how some parallel programs can be run more efficiently given prior information about the time complexities (in big-O notation) and relative time complexities of their sub-expressions, with the help of some practical examples on the larger-grain distributed multiprocessor machine LAGER.","PeriodicalId":201815,"journal":{"name":"Proceedings. Workshop on Heterogeneous Processing","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132969150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
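The granularity rule the abstract describes — ship a sub-expression for remote evaluation only when its estimated computation time (derived from its known complexity class at a given input size) exceeds the communication overhead of sending it — can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the cost models, thresholds, and names (`should_offload`, `partition`) are all invented for the example.

```python
import math

# Hypothetical cost models: complexity class -> estimated time units for input size n.
COMPLEXITY_COST = {
    "O(1)": lambda n: 1.0,
    "O(n)": lambda n: float(n),
    "O(n log n)": lambda n: n * math.log2(max(n, 2)),
    "O(n^2)": lambda n: float(n) ** 2,
}

def should_offload(complexity: str, n: int, comm_overhead: float) -> bool:
    """Offload only if the work is coarse enough to amortize the communication cost."""
    return COMPLEXITY_COST[complexity](n) > comm_overhead

def partition(subexprs, comm_overhead):
    """Split sub-expressions into locally and remotely evaluated sets."""
    local, remote = [], []
    for name, complexity, n in subexprs:
        (remote if should_offload(complexity, n, comm_overhead) else local).append(name)
    return local, remote

# Three sub-expressions with illustrative complexities at input size 1000;
# only the O(n^2) one is coarse enough to justify remote evaluation.
local, remote = partition(
    [("f", "O(1)", 1000), ("g", "O(n)", 1000), ("h", "O(n^2)", 1000)],
    comm_overhead=5000.0,
)
```

The relative-complexity information mentioned in the abstract would refine this further: among several offload candidates, the coarsest-grained sub-expressions are the best ones to evaluate remotely.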
Ashfaq A. Khokhar, Viktor K. Prasanna, Muhammad Shaaban, Cho-Li Wang
While promising feasible solutions to several shortcomings of homogeneous parallel computing, Heterogeneous SuperComputing (HSC) poses new problems to be solved. In this paper, we outline the issues and problems arising in using heterogeneous environments for parallel solutions to various applications. Preliminary solutions to a set of problems in heterogeneous supercomputing are presented. As an example application, the implementation of image understanding algorithms in a heterogeneous environment is studied.
{"title":"Heterogeneous Supercomputing: Problems and Issues","authors":"Ashfaq A. Khokhar, Viktor K. Prasanna, Muhammad Shaaban, Cho-Li Wang","doi":"10.1109/WHP.1992.664379","DOIUrl":"https://doi.org/10.1109/WHP.1992.664379","url":null,"abstract":"While promising feasible solutions to several shortcomings of homogeneous parallel computing, Heterogeneous SuperComputing (HSC) poses new problems to be solved. In this paper, we outline the issues and problems arising in using heterogeneous environments for parallel solutions to various applications. Preliminary solutions to a set of problems in heterogeneous supercomputing are presented. As an example application, the implementation of image understanding algorithms in a heterogeneous environment is studied.","PeriodicalId":201815,"journal":{"name":"Proceedings. Workshop on Heterogeneous Processing","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124213636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mu-Cheng Wang, Shin-Dug Kim, M. A. Nichols, R. F. Freund, H. Siegel, W. Nation
An approach for finding the optimal configuration of heterogeneous computer systems to solve a supercomputing problem is presented. Superconcurrency, a form of distributed heterogeneous supercomputing, is an approach for matching and managing an optimally configured suite of super-speed machines to minimize the execution time of a given task. The approach performs best when the computational requirements for a given set of tasks are diverse. A supercomputing application task is decomposed into a collection of code segments, where the processing requirement is homogeneous within each code segment. Optimal selection theory has been proposed to choose the optimal configuration of machines for a supercomputing problem. This technique is based on code profiling and analytical benchmarking. Here, the previously presented optimal selection theory is augmented in two ways: the performance of code segments on non-optimal machine choices is incorporated, and non-uniform decompositions of code segments are considered.
{"title":"Augmenting the Optimal Selection Theory for Superconcurrency","authors":"Mu-Cheng Wang, Shin-Dug Kim, M. A. Nichols, R. F. Freund, H. Siegel, W. Nation","doi":"10.1109/WHP.1992.664380","DOIUrl":"https://doi.org/10.1109/WHP.1992.664380","url":null,"abstract":"An approach for finding the optimal configuration of heterogeneous computer systems to solve a supercomputing problem is presented. Superconcurrency, a form of distributed heterogeneous supercomputing, is an approach for matching and managing an optimally configured suite of super-speed machines to minimize the execution time of a given task. The approach performs best when the computational requirements for a given set of tasks are diverse. A supercomputing application task is decomposed into a collection of code segments, where the processing requirement is homogeneous within each code segment. Optimal selection theory has been proposed to choose the optimal configuration of machines for a supercomputing problem. This technique is based on code profiling and analytical benchmarking. Here, the previously presented optimal selection theory is augmented in two ways: the performance of code segments on non-optimal machine choices is incorporated, and non-uniform decompositions of code segments are considered.","PeriodicalId":201815,"journal":{"name":"Proceedings. Workshop on Heterogeneous Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129800727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
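The core selection step the abstract describes — assign each homogeneous code segment to the machine in the suite that minimizes its benchmarked execution time — can be sketched as below. This is a simplified illustration, not the paper's augmented theory: the machine classes, timing values, and function name `select_machines` are invented, and the sketch ignores the augmentation's treatment of non-optimal choices and non-uniform decompositions.

```python
# seg_times[segment][machine] = estimated execution time for that segment on
# that machine, as would be obtained from code profiling and analytical
# benchmarking. All values here are hypothetical.
seg_times = {
    "vectorizable": {"vector": 2.0, "simd": 5.0, "mimd": 8.0},
    "data-parallel": {"vector": 9.0, "simd": 3.0, "mimd": 6.0},
    "coarse-grain": {"vector": 7.0, "simd": 6.0, "mimd": 2.5},
}

def select_machines(seg_times):
    """Per-segment choice of the fastest machine. The augmented theory would
    also weigh non-optimal machines, e.g. when the best one is unavailable."""
    return {segment: min(times, key=times.get) for segment, times in seg_times.items()}

assignment = select_machines(seg_times)
total = sum(seg_times[seg][machine] for seg, machine in assignment.items())
```

This per-segment minimum ignores inter-segment data-transfer costs; accounting for those, and for segments whose internal requirements are not uniform, is precisely what the augmentation addresses.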