
The Sixth Distributed Memory Computing Conference, 1991. Proceedings: Latest Publications

Using an Optical Bus in a Distributed Memory Multicomputer
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633311
M. H. Davis, U. Ramachandran
This research examines the use of an optical bus in a distributed memory multicomputer. We formulate a simple multicomputer model in order to concentrate on the performance of the optical bus from a Computer Architecture viewpoint. In this report we consider the optical bus to be a Small Local Area Network (S-LAN) and apply two classical LAN medium access protocols, Time Division Multiple Access (TDMA) and Carrier Sense Multiple Access/Collision Detection (CSMA/CD). From our standard discrete-event simulation experiments we principally conclude that CSMA/CD does not outperform TDMA as much as might be expected from classical analyses. We also conclude that TDMA allows a large number of nodes to be connected to the optical bus for our model.
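As a rough illustration of the TDMA side of the comparison described above (this is not the authors' simulator; the node count, slot length, and traffic model below are invented for the sketch), a minimal Python slot-assignment simulation might look like this:

```python
import random

# Minimal TDMA bus sketch (illustrative only; parameters are invented,
# not taken from the paper's multicomputer model).
NODES = 16          # nodes sharing the optical bus
SLOT = 1.0          # time units per TDMA slot
FRAMES = 1000       # frames to simulate
ARRIVAL_P = 0.3     # per-frame probability that a node generates a message

def simulate_tdma(seed=0):
    random.seed(seed)
    queues = [[] for _ in range(NODES)]   # per-node FIFO of arrival times
    waits = []
    for frame in range(FRAMES):
        frame_start = frame * NODES * SLOT
        # new arrivals at the start of the frame
        for n in range(NODES):
            if random.random() < ARRIVAL_P:
                queues[n].append(frame_start)
        # each node owns exactly one slot per frame
        for n in range(NODES):
            slot_time = frame_start + n * SLOT
            if queues[n] and queues[n][0] <= slot_time:
                waits.append(slot_time - queues[n].pop(0))
    return sum(waits) / len(waits) if waits else 0.0

if __name__ == "__main__":
    print(f"mean TDMA queueing delay: {simulate_tdma():.2f} time units")
```

Sweeping `NODES` and `ARRIVAL_P` in a sketch like this shows the qualitative trade-off the abstract alludes to: TDMA's delay grows predictably with the number of nodes, whereas a contention protocol's behaviour depends on offered load.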
Citations: 1
DAWRS: A Differential-Algebraic System Solver by the Waveform Relaxation Method
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633306
A. Secchi, M. Morari, E. Biscaia
We investigate the concurrent solution of low-index differential-algebraic equations (DAEs) by the waveform relaxation (WR) method, an iterative method for system integration. We present our new simulation code, DAWRS (Differential-Algebraic Waveform Relaxation Solver), which solves DAEs on parallel machines using WR methods, and describe new techniques to improve the convergence of such methods. As experimental results, we demonstrate the concurrent performance achievable when solving DAEs for a class of applications in chemical engineering.
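To make the WR idea concrete, here is a minimal Gauss-Jacobi waveform relaxation sketch in Python for a pair of coupled ODEs (DAWRS itself targets DAEs and runs the subsystem integrations on parallel processors; the system, integrator, and tolerances below are invented for illustration):

```python
import numpy as np

# Gauss-Jacobi waveform relaxation sketch for the coupled ODEs
#   x' = -x + 0.5*y,   y' = -y + 0.5*x,   x(0) = 1, y(0) = 0.
# Each sweep integrates every subsystem over the whole time window using
# the other subsystem's waveform from the previous sweep, so the two
# integrations inside a sweep are independent and could run concurrently.
T, N = 5.0, 500
t = np.linspace(0.0, T, N + 1)
h = t[1] - t[0]

def integrate(decay, coupling_wave, x0):
    """Forward-Euler integration of x' = -decay*x + coupling_wave(t)."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * (-decay * x[k] + coupling_wave[k])
    return x

# initial guess: hold the initial conditions constant over the window
x_wave = np.full(N + 1, 1.0)
y_wave = np.zeros(N + 1)

for sweep in range(20):                        # WR sweeps
    x_new = integrate(1.0, 0.5 * y_wave, 1.0)  # uses y from previous sweep
    y_new = integrate(1.0, 0.5 * x_wave, 0.0)  # uses x from previous sweep
    change = max(np.max(np.abs(x_new - x_wave)), np.max(np.abs(y_new - y_wave)))
    x_wave, y_wave = x_new, y_new
    if change < 1e-8:
        break

print(f"converged after {sweep + 1} sweeps, x(T) ~ {x_wave[-1]:.4f}")
```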
Citations: 5
Toward an Efficient Parallel Implementation of the Bisection Method for Computing Eigenvalues
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633210
S. Crivelli, E. Jessup
In this paper, we compare the costs of computing a single eigenvalue of a symmetric tridiagonal matrix by serial bisection and by parallel multisection on a hypercube multiprocessor. We show how the optimal method for computing one eigenvalue depends on such variables as the matrix order and the parameters of the hypercube used. Our analysis is supported by experiments on an Intel iPSC/2 hypercube multiprocessor.
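The serial kernel being parallelized here is classic Sturm-sequence bisection. The following Python sketch shows that kernel (a multisection variant would simply evaluate the eigenvalue count at several interior points of the interval at once, one per processor); the example matrix and tolerance are chosen for illustration, not taken from the paper:

```python
import numpy as np

def count_smaller(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are smaller than x, computed with
    the standard Sturm-sequence recurrence."""
    count, q = 0, 1.0
    for i in range(len(d)):
        if q == 0.0:
            q = 1e-300  # guard against an exact zero pivot
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue (k = 1..n)."""
    radius = np.abs(e).sum()  # crude but safe Gershgorin-style bound
    lo, hi = d.min() - radius, d.max() + radius
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_smaller(d, e, mid) >= k:
            hi = mid      # at least k eigenvalues lie below mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# example: tridiagonal matrix with diagonal 2 and off-diagonal -1,
# whose spectrum is known in closed form
n = 8
d = np.full(n, 2.0)
e = np.full(n - 1, -1.0)
print(kth_eigenvalue(d, e, 1))              # smallest eigenvalue
print(2.0 - 2.0 * np.cos(np.pi / (n + 1)))  # analytic value for comparison
```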
Citations: 2
Probabilistic Analysis of the Optimal Efficiency of the Multi-Level Dynamic Load Balancing Scheme
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633108
Kouichi Kimura, Nobuyuki Ichiyoshi
This paper investigates the optimal efficiency of the multi-level dynamic load balancing scheme for OR-parallel programs, using probability theory. In the single-level dynamic load balancing scheme, one processor divides a given task into a number of subtasks, which are distributed to other processors on demand and then executed independently. We introduce a formal model of the execution as a queuing system with several servers, and we investigate the optimal granularity of the subtasks that attains the maximal efficiency, taking account of the dividing costs and the load imbalance between the processors. Thus we obtain estimates of the maximal efficiency. We then apply these results to an analysis of the efficiency of the multi-level dynamic load balancing scheme, which is the iterated application of the single-level scheme in a hierarchical manner, and we show how the scalability is thereby improved over the single-level scheme.
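The granularity trade-off in the single-level scheme can be illustrated with a small Monte-Carlo sketch (the paper's own analysis is a probabilistic queueing-system treatment, not this simulation, and the cost parameters below are invented):

```python
import random

# Toy single-level load balancing model: a master splits WORK into m
# subtasks of random size at SPLIT_COST per subtask; workers pull the
# next subtask on demand.  Too few subtasks -> load imbalance dominates;
# too many -> the splitting cost dominates.
P = 16            # worker processors
WORK = 1000.0     # total sequential work
SPLIT_COST = 0.5  # cost charged per subtask created

def makespan(num_subtasks, seed=0):
    rng = random.Random(seed)
    cuts = sorted(rng.random() for _ in range(num_subtasks - 1))
    sizes = [WORK * (b - a) for a, b in zip([0.0] + cuts, cuts + [1.0])]
    # on-demand distribution: each subtask goes to the earliest-free worker
    finish = [0.0] * P
    for s in sizes:
        i = finish.index(min(finish))
        finish[i] += s
    return SPLIT_COST * num_subtasks + max(finish)

for m in (16, 32, 64, 128, 256, 512):
    avg = sum(makespan(m, seed) for seed in range(20)) / 20
    print(f"{m:4d} subtasks -> efficiency {WORK / (P * avg):.3f}")
```

Running the sweep shows the efficiency rising and then falling again, which is exactly the kind of optimum in subtask granularity the abstract refers to.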
Citations: 14
Zipcode and the Reactive Kernel for the Caltech Intel Delta Prototype and nCUBE/2*
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633073
A. Skjellum, C. Still
{"title":"Zipcocle and the Reactive Kernel for the Caltech Intel Delta Prototype and nCUBE/2*","authors":"A. Skjellum, C. Still","doi":"10.1109/DMCC.1991.633073","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633073","url":null,"abstract":"","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115073976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
The Reactive Kernel on a Shared-Memory Computer
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633141
L. Hamren, S. Mattisson
This paper describes an efficient implementation of the Caltech Cosmic Environment/Reactive Kernel multicomputer communication primitives on a Sequent Symmetry, a shared-memory multiprocessor. With this implementation, the Reactive Kernel primitives exist on distributed-memory as well as shared-memory computers, and a program can be ported between machines such as the Symult S2010 multicomputer and the Sequent Symmetry by simply recompiling the code. The message startup time on the Sequent is comparable to that of the Symult.
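The general idea of layering message-passing primitives on top of shared memory can be sketched as follows. Note that the function names `send_msg` and `recv_msg` are invented for this illustration and are not the Cosmic Environment/Reactive Kernel API described in the paper; the queues simply stand in for whatever shared-memory buffering the real implementation uses.

```python
from multiprocessing import Process, Queue

# Illustrative only: message-passing primitives layered on shared memory.
# Each "node" sees only send/receive and never touches the shared
# structures directly, which is what makes the same program portable to
# a distributed-memory machine.

def worker(node_id, inbox, outboxes):
    def send_msg(dest, payload):
        outboxes[dest].put((node_id, payload))   # invented name, see note

    def recv_msg():
        return inbox.get()                       # blocking receive

    if node_id == 0:
        send_msg(1, "ping")
        src, msg = recv_msg()
        print(f"node 0 got {msg!r} back from node {src}")
    else:
        src, msg = recv_msg()
        send_msg(src, msg + "/pong")

if __name__ == "__main__":
    inboxes = [Queue(), Queue()]
    procs = [Process(target=worker, args=(i, inboxes[i], inboxes))
             for i in range(2)]
    for p in procs: p.start()
    for p in procs: p.join()
```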
Citations: 0
Basic Linear Algebra Communication Subprograms
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633146
E. Anderson, A. Benzoni, J. Dongarra, S. Moulton, S. Ostrouchov, B. Tourancheau, R. van de Geijn
{"title":"Basic Linear Algebra Comrnunication Subprograms","authors":"E. Anderson, A. Benzoni, J. Dongarra, S. Moulton, S. Ostrouchov, B. Tourancheau, R. van de Geijn","doi":"10.1109/DMCC.1991.633146","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633146","url":null,"abstract":"","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132974541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
Structured Parallel Programming on Multicomputers
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633130
Zhiwei Xu
Currently, parallel programs for distributed memory multicomputers are difficult to write, understand, test, and reason about. It is observed that these difficulties can be attributed to the lack of a structured style in current parallel programming practice. In this paper, we present a structured methodology to facilitate parallel program development on distributed memory multicomputers. The methodology aims at developing parallel programs that are determinate (the same input always produces the same output; in other words, the result is repeatable), terminating (the program is free of deadlock and other infinite-waiting anomalies), and easy to understand and test. It also enables us to apply the conventional, well-established techniques of software engineering to parallel program development, with some new ideas added to handle parallelism. The methodology contains five basic principles: (1) use structured constructs; (2) develop determinate and terminating programs; (3) follow a two-phase design; (4) use a mathematical model to define the semantics of parallel programs; and (5) employ computer-aided techniques for analyzing and checking programs. Our basic approach is to combine these principles to cope with the complexity of parallel programming. As shown in Fig. 1, while the total space of all parallel programs is very large, applying the first three principles drastically reduces the space to a subspace (Class IV); since this subspace is much smaller, the programming task becomes simpler.
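As a small illustration of the determinacy property the abstract emphasizes (this is a generic example, not the paper's own construct set or notation), a structured parallel step that is a pure map over disjoint data followed by a deterministic reduction gives the same answer on every run, regardless of scheduling:

```python
from multiprocessing import Pool

# The parallel part is a pure map over independent chunks, combined by a
# deterministic reduction, so the result never depends on how the
# workers are scheduled.
def chunk_sum(chunk):
    return sum(x * x for x in chunk)            # pure: no shared state

def parallel_sum_of_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(chunk_sum, chunks)  # structured parallel step
    return sum(partials)                        # deterministic combine

if __name__ == "__main__":
    data = list(range(10_000))
    print(parallel_sum_of_squares(data))        # same output on every run
```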
Citations: 1
A Scalable VLSI MIMD Routing Cell
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633351
H. Corporaal, J. Olk
It is a well-known fact that full-custom designed computer architectures can achieve much higher performance for specific applications than general-purpose computers. This performance has to be paid for: a long design trajectory results in a high cost-performance ratio. Current VLSI design and compilation tools, however, make semi-custom designs feasible with greatly reduced costs and time to market. This paper presents a scalable and flexible communication processor for message-passing MIMD systems. This communication processor is implemented as a parametrized VLSI routing cell in a VLSI compilation system. The cell fits into the SCARCE RISC processor framework [1], an architectural framework for the automatic generation of application-specific processors. By use of application analysis, the cell is tuned to the specific requirements during silicon compilation time. This approach is new in that it avoids the general performance penalty paid for required flexibility.
Citations: 5
Comparing Some Approaches to Programming Distributed Memory Machines
Pub Date : 1991-04-28 DOI: 10.1109/DMCC.1991.633131
M. Haveraaen
We show that programs written for the SIMD machine model are equivalent to a special form of barrier MIMD programs. This form is called CPP. The CPP form is also produced when compiling functional languages like Crystal and Sapphire. CPP programs may be executed on MIMD computers without any need for global synchronization and with little or no communication overhead, probably with a gain in execution speed as a result. This raises the challenge of constructing MIMD computers with many processors and low-cost communication in order to fully utilize this potential.
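The barrier-MIMD shape referred to above can be sketched as follows: every thread runs the same code on its own slice of the data, and a barrier stands wherever the SIMD model would be implicitly lockstep. This only illustrates that general shape under invented data and phase structure, not the paper's CPP form itself.

```python
import threading

# SIMD-style smoothing written as a barrier MIMD program: each thread
# owns a disjoint slice, and barriers separate the "compute new values"
# phase from the "publish new values" phase.
N_THREADS, STEPS = 4, 10
a = [float(i) for i in range(16)]      # shared data, disjointly partitioned
b = [0.0] * len(a)
barrier = threading.Barrier(N_THREADS)

def worker(tid):
    lo = tid * len(a) // N_THREADS
    hi = (tid + 1) * len(a) // N_THREADS
    for _ in range(STEPS):
        for i in range(lo, hi):        # local smoothing step on own slice
            left = a[i - 1] if i > 0 else a[i]
            right = a[i + 1] if i < len(a) - 1 else a[i]
            b[i] = 0.5 * a[i] + 0.25 * (left + right)
        barrier.wait()                 # all writes to b are complete
        for i in range(lo, hi):
            a[i] = b[i]
        barrier.wait()                 # all of a updated before next step

threads = [threading.Thread(target=worker, args=(t,)) for t in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()
print(a)
```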
Citations: 3