
Latest publications: Proceedings of Workshop on Programming Models for Massively Parallel Computers

Structured parallel programming
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315543
J. Darlington, M. Ghanem, H. To
Parallel programming is a difficult task involving many complex issues such as resource allocation and process coordination. We propose a solution to this problem based on the use of a repertoire of parallel algorithmic forms, known as skeletons. The use of skeletons enables the meaning of a parallel program to be separated from its behaviour. Central to this methodology is the use of transformations and performance models. Transformations provide portability and implementation choices, whilst performance models guide the choices by providing predictions of execution time. We describe the methodology and investigate the use and construction of performance models by studying an example.
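The skeleton idea is concrete enough to sketch. Below is a minimal illustration in Python (our own sketch, not the authors' notation): a FARM skeleton for independent tasks, a PIPE skeleton for staged streams, and a toy performance model of the kind a transformation system could consult when choosing an implementation. All names and the cost formula are assumptions for illustration.

```python
# Minimal skeleton sketch; FARM/PIPE names and the farm_time formula are assumed.
from multiprocessing import Pool

def square(x):
    """Example worker; module-level so it can be pickled by Pool."""
    return x * x

def farm(worker, tasks, nworkers=4):
    """FARM skeleton: apply an independent worker to every task in parallel."""
    with Pool(nworkers) as pool:
        return pool.map(worker, tasks)

def pipe(stages, stream):
    """PIPE skeleton: pass the whole stream through each stage in order."""
    for stage in stages:
        stream = [stage(x) for x in stream]
    return stream

def farm_time(n_tasks, t_task, n_workers, t_comm):
    """Toy performance model for FARM: ideal compute plus per-task traffic.
    The paper's models are machine-calibrated; this is only a stand-in."""
    return (n_tasks / n_workers) * t_task + n_tasks * t_comm

if __name__ == "__main__":
    print(farm(square, range(8), nworkers=2))    # [0, 1, 4, 9, 16, 25, 36, 49]
    print(pipe([square, square], [1, 2, 3]))     # [1, 16, 81]
    print(farm_time(1000, t_task=1e-3, n_workers=16, t_comm=1e-5))
```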
Citations: 81
Compiling data parallel programs to message passing programs for massively parallel MIMD systems
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315550
T. Brandes
The currently dominant message-passing programming paradigm for MIMD systems is difficult to use and error-prone. One approach that avoids explicit communication is the data-parallel programming model. This model stands for a single thread of control, a global name space, and loosely synchronous parallel computation. It is easy to use, and data-parallel programs usually scale very well. Based on the experience of an existing compilation system for data-parallel Fortran programs, it is shown how to design such a compilation system and which optimization techniques are required to make data-parallel programs competitive with their handwritten message-passing counterparts.
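As an illustration of the lowering the abstract describes, here is a hand-written sketch (assumed, not taken from the paper or its compiler) of how a data-parallel statement such as a[i] = b[i-1] + b[i+1] becomes an SPMD node program: the array is block-distributed, each node exchanges one-element halos with its neighbours, and the owner-computes rule restricts updates to locally owned elements. The send/recv primitives and the two-node driver are hypothetical stand-ins for a message-passing runtime.

```python
# Hypothetical lowering of  a[i] = b[i-1] + b[i+1]  to an SPMD node program.
import queue
import threading

def node_program(rank, nprocs, b_local, send, recv):
    """One node's compiled code: exchange halos, then compute locally.
    send/recv are stand-ins for the message-passing runtime."""
    left, right = rank - 1, rank + 1
    if left >= 0:
        send(left, b_local[0])            # my first element is their halo
    if right < nprocs:
        send(right, b_local[-1])          # my last element is their halo
    lo = recv(left) if left >= 0 else 0   # zero ghost cell at the boundary
    hi = recv(right) if right < nprocs else 0
    ext = [lo] + list(b_local) + [hi]     # owned block plus ghost cells
    # owner-computes rule: update only the locally owned elements
    return [ext[i - 1] + ext[i + 1] for i in range(1, len(ext) - 1)]

if __name__ == "__main__":
    b = [1, 2, 3, 4, 5, 6]
    blocks = {0: b[0:3], 1: b[3:6]}       # block distribution over 2 nodes
    boxes = {r: queue.Queue() for r in blocks}
    results = {}

    def run(r):
        # Toy mailboxes ignore the source rank, which is only safe in this
        # two-node demo where each node receives exactly one message.
        results[r] = node_program(r, len(blocks), blocks[r],
                                  send=lambda dest, v: boxes[dest].put(v),
                                  recv=lambda src: boxes[r].get())

    threads = [threading.Thread(target=run, args=(r,)) for r in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results[0] + results[1])        # [2, 4, 6, 8, 10, 5]
```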
Citations: 16
The Modula-2* environment for parallel programming
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315555
S.U. Hanssgen, E. A. Heinz, P. Lukowicz, M. Philippsen, W. Tichy
Presents a portable parallel programming environment for Modula-2*, an explicitly parallel, machine-independent extension of Modula-2. Modula-2* offers synchronous and asynchronous parallelism, a global single address space, and automatic data and process distribution. The Modula-2* system consists of a compiler, a debugger, a cross-architecture make, a graphical X Windows control panel, run-time systems for different machines, and sets of scalable parallel libraries. The existing implementation targets the MasPar MP series of massively parallel processors (SIMD), the KSR-1 parallel computer (MIMD), heterogeneous LANs of workstations (MIMD), and single workstations (SISD). We describe the important components of the Modula-2* environment and discuss selected implementation issues. We focus on how we achieve a high degree of portability for our system while at the same time ensuring efficiency.
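The distinction between Modula-2*'s synchronous and asynchronous parallelism can be illustrated with a toy interpreter. The sketch below encodes our reading of the semantics (an assumption, not the actual Modula-2* implementation): a synchronous FORALL completes each statement across all iterations before any iteration advances, while an asynchronous FORALL lets every iteration run to completion independently.

```python
# Toy semantics: assumed reading of Modula-2* FORALL, not its implementation.
def forall_sync(indices, statements, state):
    """Synchronous FORALL: run each statement over ALL iterations (lockstep)
    before advancing to the next statement."""
    for stmt in statements:
        for i in indices:
            stmt(i, state)
    return state

def forall_async(indices, statements, state):
    """Asynchronous FORALL: each iteration runs all statements to completion."""
    for i in indices:
        for stmt in statements:
            stmt(i, state)
    return state

def read(i, s):   # tmp[i] := a[i-1]   (assignment split into read/write phases)
    s["tmp"][i] = s["a"][i - 1]

def write(i, s):  # a[i] := tmp[i]
    s["a"][i] = s["tmp"][i]

if __name__ == "__main__":
    # shift-right a[i] := a[i-1]: the lockstep form reads all old values first
    s1 = {"a": [1, 2, 3, 4], "tmp": [0] * 4}
    s2 = {"a": [1, 2, 3, 4], "tmp": [0] * 4}
    print(forall_sync(range(1, 4), [read, write], s1)["a"])   # [1, 1, 2, 3]
    print(forall_async(range(1, 4), [read, write], s2)["a"])  # [1, 1, 1, 1]
```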
Citations: 9
Modeling parallel computers as memory hierarchies
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315548
B. Alpern, L. Carter, J. Ferrante
A parameterized generic model that captures the features of diverse computer architectures would facilitate the development of portable programs. Specific models appropriate to particular computers are obtained by specifying parameters of the generic model. A generic model should be simple, and for each machine that it is intended to represent, it should have a reasonably accurate specific model. The Parallel Memory Hierarchy (PMH) model of computation uses a single mechanism to model the costs of both interprocessor communication and memory hierarchy traffic. A computer is modeled as a tree of memory modules with processors at the leaves. All data movement takes the form of block transfers between children and their parents. The paper assesses the strengths and weaknesses of the PMH model as a generic model.
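A small sketch makes the cost mechanism concrete. The code below is our reading of the abstract, not the paper's formulation: memory modules form a tree, each edge has a per-block transfer time, and the cost of moving data between two modules is the number of blocks times the summed edge costs along the path through their lowest common ancestor. Module names and rates are invented for the example.

```python
# PMH-style cost sketch; tree shape, block counts and times are invented.
class Module:
    """A memory module; block_time is the per-block cost of the edge to
    its parent (0.0 for the root, which has no parent edge)."""
    def __init__(self, name, parent=None, block_time=0.0):
        self.name, self.parent, self.block_time = name, parent, block_time

def transfer_cost(src, dst, nblocks):
    """All data movement is block transfers between children and parents,
    so a transfer walks up from src and down to dst via their lowest
    common ancestor, paying every edge on the way."""
    def path_to_root(m):
        path = []
        while m is not None:
            path.append(m)
            m = m.parent
        return path
    up, down = path_to_root(src), path_to_root(dst)
    common = next(m for m in up if m in down)   # lowest common ancestor
    edges = up[:up.index(common)] + down[:down.index(common)]
    return sum(nblocks * m.block_time for m in edges)

if __name__ == "__main__":
    root = Module("shared")
    mem0 = Module("mem0", root, block_time=10.0)     # interprocessor link
    mem1 = Module("mem1", root, block_time=10.0)
    cache0 = Module("cache0", mem0, block_time=1.0)  # memory-hierarchy link
    print(transfer_cost(cache0, mem1, nblocks=4))    # 4 * (1 + 10 + 10) = 84.0
```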
Citations: 93
MANIFOLD: a programming model for massive parallelism
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315544
F. Arbab, É. Rutten
MANIFOLD is a coordination language for orchestrating the communications among independent, cooperating processes in a massively parallel or distributed application. The fundamental principle underlying MANIFOLD is the complete separation of computation from communication. This means that in MANIFOLD, computation processes know nothing about their own communication with other processes, and coordinator processes manage the communications among a set of processes but know nothing about the computation they carry out. This principle leads to more flexible software made out of more re-usable components, and supports open systems. MANIFOLD is a new programming language based on a number of novel concepts. MANIFOLD is about the concurrency of cooperation, as opposed to classical work on concurrency, which deals with the concurrency of competition. In order to better understand the fundamentals of this language and its underlying model, we focus on the kernel of a simple sub-language of MANIFOLD, called MINIFOLD.
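The separation of computation from communication can be illustrated with a small coordinator sketch (assumed, and far simpler than MANIFOLD itself): workers compute on anonymous input and output streams, and only the coordinator knows how the streams are wired together.

```python
# Coordinator sketch: computation sees only streams; wiring lives elsewhere.
import queue
import threading

def compute(fn, inbox, outbox):
    """A computation process: it sees only its own input and output streams
    and knows nothing about which process sits on either end."""
    while (item := inbox.get()) is not None:
        outbox.put(fn(item))
    outbox.put(None)                      # propagate end-of-stream

def coordinator(stages, items):
    """The coordinator owns all communication: it creates the streams and
    connects the computation processes into a line."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    for fn, q_in, q_out in zip(stages, qs, qs[1:]):
        threading.Thread(target=compute, args=(fn, q_in, q_out)).start()
    for x in items:
        qs[0].put(x)
    qs[0].put(None)
    out = []
    while (y := qs[-1].get()) is not None:
        out.append(y)
    return out

if __name__ == "__main__":
    print(coordinator([lambda x: x + 1, lambda x: x * x], [1, 2, 3]))  # [4, 9, 16]
```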
Citations: 6
Performance analysis of distributed applications by suitability functions
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315540
V. Getov, R. Hockney, A. Hey
A simple programming model of distributed-memory message-passing computer systems is first applied to describe the architecture/application pair by two sets of parameters. The node timing formula is then derived on the basis of scalar, vector, and communication components. A set of suitability functions, extracted from the performance formulae, is defined. These functions are applied as an example to the performance analysis of the 1-dimensional FFT benchmark from the GENESIS benchmark suite. The suitability functions could also be useful for comparative performance analysis of both existing distributed-memory systems and new architectures under development.
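The abstract does not reproduce the formulae, so the following is a hedged reconstruction of their general shape: node time as a sum of scalar, vector, and communication terms, each an application workload parameter divided by a machine rate parameter, with a suitability-style figure formed as a ratio of predicted times. All parameter values below are hypothetical.

```python
# Assumed shape of a scalar/vector/communication node timing formula.
def node_time(w_scal, w_vect, w_comm, r_scal, r_vect, r_comm):
    """w_*: application workload (operation counts, data volume);
    r_*: sustained machine rates in matching units. Both the split into
    three terms and the units here are assumptions for illustration."""
    return w_scal / r_scal + w_vect / r_vect + w_comm / r_comm

def suitability(workload, rates_a, rates_b):
    """One plausible suitability-style figure: the ratio of predicted
    execution times of the same application on two architectures."""
    return node_time(*workload, *rates_a) / node_time(*workload, *rates_b)

if __name__ == "__main__":
    fft_1d = (1e6, 5e7, 2e6)        # hypothetical workload parameters
    machine_a = (5e7, 2e9, 1e8)     # hypothetical machine rates
    machine_b = (1e8, 5e8, 5e7)
    print(node_time(*fft_1d, *machine_a))
    print(suitability(fft_1d, machine_a, machine_b))
```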
Citations: 7
Parallel programming models and their interdependence with parallel architectures
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315560
W. Giloi
Because of its superior performance and cost-effectiveness, parallel computing will become the future standard, provided we have the appropriate programming models, tools, and compilers needed to make parallel computers widely usable. The dominating programming style is procedural, given in the form of either the memory-sharing or the message-passing paradigm. The advantages and disadvantages of these models and their supporting architectures are discussed, as well as the tools by which parallel programming is made machine-independent. Further improvements can be expected from very high level coordination languages. A general breakthrough of parallel computing, however, will only come with parallelizing compilers that enable the user to program applications in the conventional sequential style. The state of the art of parallelizing compilers is outlined, and it is shown how they will be supported by higher-level programming models and multi-threaded architectures.
Citations: 8
Massively parallel programming using object parallelism
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315545
W. Joosen, S. Bijnens, P. Verbaeten
We introduce the concept of object parallelism. Object parallelism offers a unified model in comparison with traditional parallelisation techniques such as data parallelism and algorithmic parallelism. In addition, two fundamental advantages of the object-oriented approach are exploited. First, the abstraction level of object parallelism is application-oriented, i.e., it hides the details of the underlying parallel architecture. Thus, the portability of parallel applications is inherent and program development can occur on monoprocessor systems. Secondly, the concept of specialisation (through inheritance) enables the integration of the given application code with advanced run-time support for load balancing and fault tolerance.
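A toy sketch (our illustration, not the authors' system) shows how specialisation through inheritance can attach run-time concerns such as load balancing without touching application-level code; the Particle class and the placement policy are invented for the example.

```python
# Invented example of specialisation via inheritance for load balancing.
class ParallelObject:
    """Base run-time class: default placement puts every object on node 0."""
    def placement(self, nnodes):
        return 0

class LoadBalanced(ParallelObject):
    """Specialisation: overrides placement only; application code that
    switches its base class gains load balancing without other changes."""
    def placement(self, nnodes):
        return hash(id(self)) % nnodes    # toy policy: spread by identity

class Particle(LoadBalanced):
    """Application class: purely application-oriented; no architecture
    details appear anywhere in its methods."""
    def __init__(self, x, v):
        self.x, self.v = x, v
    def step(self, dt):
        self.x += self.v * dt

if __name__ == "__main__":
    swarm = [Particle(float(i), 1.0) for i in range(6)]
    for p in swarm:                       # a runtime would dispatch by node
        p.step(0.1)
        print(p.placement(nnodes=4), round(p.x, 2))
```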
Citations: 3
PROMOTER: an application-oriented programming model for massive parallelism
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315539
W. Giloi, A. Schramm
The article deals with the rationale and concepts of a programming model for massive parallelism. We mention the basic properties of massively parallel applications and develop a programming model for data parallelism on distributed-memory computers. Its key features are a suitable combination of homogeneity and heterogeneity aspects, a unified representation of data point configurations and interconnection schemes by explicit virtual data topologies, and various synchronization schemes and nondeterminisms. An outline of the linguistic representation and the abstract executional model is given.
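To make the idea of explicit virtual data topologies concrete, here is a small sketch in Python (our illustration; PROMOTER's actual notation is not shown in the abstract): a topology declares the data points and their interconnection scheme once, and a data-parallel step is expressed against that declaration.

```python
# Invented notation for a virtual data topology; not PROMOTER syntax.
class GridTopology:
    """Virtual 1-D grid: data points 0..n-1, neighbours at distance 1."""
    def __init__(self, n):
        self.points = range(n)
    def neighbours(self, i):
        return [j for j in (i - 1, i + 1) if 0 <= j < len(self.points)]

def parallel_step(topo, values, update):
    """Apply update(point, own value, neighbour values) at every data point
    simultaneously, simulated by computing against the old state."""
    return [update(i, values[i], [values[j] for j in topo.neighbours(i)])
            for i in topo.points]

if __name__ == "__main__":
    topo = GridTopology(5)
    v = [0.0, 0.0, 1.0, 0.0, 0.0]
    smooth = lambda i, x, nbrs: (x + sum(nbrs)) / (1 + len(nbrs))
    print(parallel_step(topo, v, smooth))   # one diffusion-like step
```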
Citations: 15
Interprocedural heap analysis for parallelizing imperative programs
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315553
U. Assman, M. Weinhardt
The parallelization of imperative programs working on pointer data structures is made possible by extensive heap analysis. To this end, we consider a new interprocedural version of the heap analysis algorithm with summary nodes from Chase, Wegman and Zadeck (1990). Our analysis handles arbitrary call graphs, including recursion, works on a realistic low-level intermediate language, and uses a modified propagation method to correct an inaccuracy of the original algorithm. Furthermore, we discuss how loops and recursions over heap data structures can be parallelized based on the analysis information.
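A heavily simplified sketch of the summary-node idea follows (our illustration in the style of Chase, Wegman and Zadeck; the paper's interprocedural machinery and correction are not reproduced): each allocation site contributes one summary node, assignments propagate points-to sets, and a parallelizer can query whether two pointers may alias.

```python
# Simplified summary-node points-to sketch; not the paper's algorithm.
class HeapGraph:
    def __init__(self):
        self.points_to = {}               # variable -> set of summary nodes

    def alloc(self, var, site):
        """x = new ...: x may point to the summary node of this site.
        One node summarises ALL objects allocated at the site."""
        self.points_to.setdefault(var, set()).add(("heap", site))

    def copy(self, dst, src):
        """x = y: dst may point to everything src may point to."""
        self.points_to.setdefault(dst, set()).update(
            self.points_to.get(src, set()))

    def may_alias(self, a, b):
        """Two pointers may alias if their points-to sets intersect; loops
        over provably disjoint structures are then parallelizable."""
        return bool(self.points_to.get(a, set()) & self.points_to.get(b, set()))

if __name__ == "__main__":
    g = HeapGraph()
    g.alloc("p", site=1)                  # p = new Node()
    g.alloc("q", site=2)                  # q = new Node()
    g.copy("r", "p")                      # r = p
    print(g.may_alias("p", "q"))          # False: disjoint allocation sites
    print(g.may_alias("p", "r"))          # True
```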
Citations: 17