Air Traffic Simulation: An Object Oriented, Discrete Event Simulation on the Intel iPSC/2 Parallel System
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.555368
W. Bain
A discrete event simulation model of air traffic flow within the United States has been written and executed on the Intel iPSC/2 parallel system. The simulation program was written in an object-oriented manner using the Interwork II™ Concurrent Programming Toolkit. This simulation demonstrates how object-oriented programming can simplify both the design of complex simulations and the effort to distribute and balance the processing load on distributed-memory parallel architectures such as the iPSC/2. It also demonstrates the capacity of these architectures to solve very large simulation problems.
{"title":"Air Traffic Simulation: An Object Oriented, Discrete Event Simulation on the Intel iPSC/2 Parallel System","authors":"W. Bain","doi":"10.1109/DMCC.1990.555368","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555368","url":null,"abstract":"A discrete event simulation model of air traffic flow within the United States has been written and executed on the Intel iPSC8/2 parallel system. The simulation program was written in an object oriented manner using the Interwork IITM Concurrent Programming Toolkit. This simulation demonstrates how object oriented programming can simplify the design of complex simulations and can simplify the effort to distribute and balance the processing load on distributed memory, parallel architectures, such as the iPSC/2. It also demonstrates the capacity of these architectures to solve very large simulation problems.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115498835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MMPS: Portable Message Passing Support for Parallel Computing
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556282
A. Grimshaw, D. Mack, W. Strayer
Models of parallel computation based upon message passing are in widespread use today, yet the message passing primitives available on different architectures often differ in subtle ways. The situation on distributed systems is even worse: not only are the interfaces different, but the services provided are not sufficient for data-driven computation. MMPS addresses this problem. First, MMPS provides a basic message passing service with guaranteed delivery that can be ported to a wide variety of architectures; applications that use the MMPS interface will be portable with respect to the message system. Second, MMPS provides a customizable interface that exploits the C++ [1] class hierarchy to allow the user to define new types of messages with new services. The new services can be implemented efficiently using existing code and inheritance.
{"title":"MMPS: Portable Message Passing Support for Parallel Computing","authors":"A. Grinshaw, D. Mack, W. Strayer","doi":"10.1109/DMCC.1990.556282","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556282","url":null,"abstract":"Models of parallel computation based upon message passing are in wide-spread use today, yet the message passing primitives available on different architectures are often different in subtle ways. The situation on distributed systems is even worse; not only are there different interfaces, but the services provided are not sufficient for data driven computation. MMPS is a solution to the problem. First, MMPS provides a basic message passing service with guaranteed delivery that can be ported to a wide variety of architectures. Applications that use the MMPS interface will be portable with respect to the message system. Second, MMPS provides a customizable interface that exploits the C++ [l] class hierarchy to allow the user to define new types of messages with new services. The new services can be efficiently implemented using existing code and inheritance.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116271520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Distributing Linked Lists
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556300
D.A. Sykes
Distributing Linked Lists. David A. Sykes, Department of Computer Science, Clemson University, Clemson, SC 29634-1906, dsykes@hubcap.clemson.edu
{"title":"On Distributing Linked Lists","authors":"D.A. Sykes","doi":"10.1109/DMCC.1990.556300","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556300","url":null,"abstract":"Distributing Linked Lists David A. Sykes Department of Computer Science Clemson University Clemson, SC 29634-1906 dsykesO hubcap.clemson.edu","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127312695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Refining the Communication Model for the Intel iPSC/2
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556394
S. Seidel, T.E. Schmiermund
The communication model of a message-passing network must take into account interactions between concurrent communication operations in order to describe its behavior during periods of high communication activity. This work describes the results of measuring certain concurrent communication operations on an Intel iPSC/2-d4 hypercube. The operations measured set practical limits on the number and type of concurrent communication operations that can be supported by each node. These aspects of the iPSC/2 communication network, along with measurements of bandwidth and latency given here and by others, constitute a communication model that can be used to describe the costs of communication-intensive operations, such as solutions to the broadcast and scatter/gather problems.
{"title":"Refining the Communication Model for the Intel iPSC/2","authors":"S. Seidel, T.E. Schmiermund","doi":"10.1109/DMCC.1990.556394","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556394","url":null,"abstract":"The communication model of a message-passing network must take into account interactions between concurrent communication operations in order to describe its behavior during high levels of communication activity. This work describes the results of measuring certain concurrent communication operations on an Intel ipSC/2-d4 hypercube. The operations measured set practical limits on the number and type of concurrent communication operations that can be supported by each node. These aspects of the iF'SC/2 communication network, along with measurements of bandwidth and latency given here and by others, constitute a communication model that can be used to describe the costs of intensive communication Operations, such as solutions to the broadcast and the scatter/gather problems.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126875984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Triangle Method for Saving Startup Time in Parallel Computers
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.555436
H. Eissfeller, S. Muller
We present a new parallel implementation of explicit time stepping methods for time dependent equations in one or two spatial dimensions. The aim is to minimize the number of data transfers in order to obtain faster algorithms. In one spatial dimension, τ explicit time steps on p processors using a grid of size n need O(τn/p) arithmetical operations and O(τ) startup operations. The triangle method also requires O(τn/p) arithmetical operations but only O(τp/n) startup operations. In two spatial dimensions, using a grid of size n × n and the same algorithm, the O(τ) startup operations of the conventional approach are considerably reduced to O(τp/n) startup operations. All constants hidden in the O-notation are less than 5.
{"title":"The Triangle Method for Saving Startup Time in Parallel Computers","authors":"H. Eissfeller, S. Muller","doi":"10.1109/DMCC.1990.555436","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555436","url":null,"abstract":"We present a new parallel implementation of explicit time stepping methods for time dependent equations in one or two spatial dimensions. The aim is to minimize the number of data transfers, to get faster algorthms. In one spatial dimension, z explicit time steps on p processors using a grid of size n need O i t n / p ) arithmetical operations and O( z ) startup operations The triangle method also requires Oi t n / p 1 arithmetical operations but only O! z p / n ) startup operations. In two spatial dimensions, using a grid of size n n and given the same algorithm, the startup time of OCTI operations using the conventional approach is considerably reduced to O( T 6 / n 1 startup operations. All constants regarding the 0-notation are less than 5","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123733253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convergence, and Circuit Partitioning Aspects for Waveform Relaxation
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.555440
U. Miekkala, O. Nevanlinna, A. Ruehli
This paper gives a mathematical investigation of the convergence properties of a model problem which, at first sight, seems to be unsuitable for waveform relaxation (WR). The model circuit represents a limiting case of capacitive coupling in which the capacitances to ground are zero. We show that the WR approach converges. Since the convergence is generally slow, we discuss appropriate techniques for accelerating it.
{"title":"Convergence, and Circuit Partitioning Aspects for Waveform Relaxation","authors":"U. Miekkala, O. Nevanlinna, A. Ruehli","doi":"10.1109/DMCC.1990.555440","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555440","url":null,"abstract":"This paper gives a mathematical investigation of the convergence properties of a model problem which, at first sight, seems to be unsuitable for waveform relaxation. The model circuit represents a limiting case for capacitive coupling where the capacitances to ground are zero. We show that the WR approach converges. Since the convergence is generally slow we discuss appropriate techniques for accelerating convergence.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121896931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scheduling Real-Time Computations on Hypercubes with Load Balancing
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556308
Kwei-Jay Lin, Jen-Yao Chung, J.W.-S. Liu
This paper discusses the problem of scheduling hard real-time jobs on hypercube machines. Our model differs from traditional systems in that a job may be terminated if there is not enough time, producing an acceptable but less precise result. Each job can thus be divided into a hard (real-time) task, which must always be finished before its deadline, and a flexible soft task, which is executed only if there is enough processor time left. All hard tasks are scheduled first so that at least a minimally acceptable result is available for each job; soft tasks are scheduled only after all hard tasks are done. We propose a new scheduling algorithm which combines the shortest refinement first algorithm with the well-known gradient load balancing method. The performance of the algorithm is studied by simulation.
{"title":"Scheduling Real-Time Computations on Hypercubes with Load Balancing","authors":"Kwei-Jay Lin, Jen-Yao Chung, J.W.-S. Liu","doi":"10.1109/DMCC.1990.556308","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556308","url":null,"abstract":"This paper discusses the problem of scheduling hard real-time jobs on hypercube machines. Our model is dierent from traditional systems in that a job in our systems may be terminated if there is not enough time, producing an acceptable but lees precise result. Each job thus can be divided into a hutd (real-time) task which must always be finished before its deadline, and a flexible soft task which is executed only if there is enough processor time left. All hard tasks are scheduled first so that at least a minimally acceptable result is available for each job, but soft tasks are scheduled only after all hard tasks are done. We propose a new scheduling algorithm which combines the shortest refinement first algorithm with the well-known gradient load balancing method. The performance of the algorithm is studied by simulations.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126281136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach to Reconfigure a Fault-Tolerant Loop System
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556290
C. Liang, S.K. Chen, W. Tsai
{"title":"An Approach to Reconfigure a Fault-Tolerant Loop System","authors":"C. Liang, S.K. Chen, W. Tsai","doi":"10.1109/DMCC.1990.556290","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556290","url":null,"abstract":"","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130569624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Characterization of ES-Kit Distributed Environments
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556386
R. Shah, S. Lamb, R. J. Smith
The Experimental Systems Kit (ES-Kit) Project at the Microelectronics and Computer Technology Corporation has developed an open collection of high performance hardware and software modules that can be assembled into many different experimental distributed object-oriented systems. This paper presents the results of a preliminary comparative performance evaluation of three environments on which the ES-Kit software currently runs.
{"title":"Performance Characterization of ES-Kit Distributed Environments","authors":"R. Shah, S. Lamb, R. J. Smith","doi":"10.1109/DMCC.1990.556386","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556386","url":null,"abstract":"The Experimental Systems Kit (ES-Kit) Project at the Microelectronics and Computer Technology Corporation has developed an open collection of high performance hardware and software modules that can be assembled into many different experimental distributed object-oriented systems. This paper presents the results of a preliminary comparative performance evaluation of three environments on which the ES-Kit software currently runs.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129642321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DAWGS - A Distributed Compute Server Utilizing Idle Workstations
Pub Date: 1990-04-08. DOI: 10.1109/DMCC.1990.556276
H. Clark, B. McMillin
A collection of powerful workstations interconnected by a local area network forms a large computing resource. The problem of locating and efficiently using this resource has been the subject of much study. When the system is composed of workstations, an attractive technique may be employed to make use of workstations left idle by their owners. The Distributed Automated Workload balancinG System (DAWGS) is designed to allow users to harness this networked computing power for their programs. Essentially, DAWGS is an interface between the user and the kernel which allows users to submit batch or interactive processes for execution on an idle workstation somewhere on the local area network. DAWGS uses a distributed scheduler based on a bidding scheme which resolves many of the problems with bidding to determine which machine should run a process. It redirects all I/O from the remotely running process back to the machine from which the process came. DAWGS can checkpoint and restart any type of process, including interactive ones, even when the restart is on a machine different from the one on which the process was previously running. We show that running processes remotely on idle workstations can result in significantly lower execution times, particularly for processes with large execution times. Our method differs from previous work in that it is fault-tolerant, maintains total remote execution transparency for the user, and is fully distributed.
{"title":"DAWGS - A Distributed Compute Server Utilizing Idle Workstations","authors":"H. Clark, B. McMillin","doi":"10.1109/DMCC.1990.556276","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556276","url":null,"abstract":"Abstract A collection of powerful workstations interconnected by a local area network forms a large computing resource. The problem of locating and efficiently using this resource has been the subject of much study. When the system is composed of workstations, an attractive technique may be employed to make use of workstations left idle by their owners. The Distributed Automated Workload balancinG System (DAWGS) is designed to allow users to utilize this networked computing power for their programs. Essentially, DAWGS is an interface between the user and the kernel which allows users to submit batch-type or interactive-type processes or jobs for execution on an idle workstation somewhere on a local area network. DAWGS uses a distributed scheduler based on a bidding scheme which resolves many of the problems with bidding to determine which machine to run a process on. It properly redirects all I/O from the remotely running process back to the machine from whence the process came. DAWGS is capable of checkpointing processes and restarting any type of process, including interactive ones, even when the restart is on a machine different than the one the process was previously running on. We show that running processes remotely on idle workstations can result in significantly lower execution times, particularly for processes with a large execution time. Our method is different from previous work in that it is fault-tolerant, maintains total remote execution transparency for the user, and is fully distributed.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134487474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}