
Programming Models for Massively Parallel Computers: Latest Publications

Distributed memory implementation of elliptic partial differential equations in a dataparallel functional language
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504352
H. Kuchen, H. Stoltze, I. Dimov, A. Karaivanova
We show that the numerical solution of partial differential equations can be addressed elegantly and efficiently in a functional language. Two statistical numerical methods are considered. We discuss why current parallel imperative languages are difficult to use and why general (expression-parallel) functional languages are not efficient enough. The key point of our approach is to offer "unique" arrays together with operations that allow their elements to be handled in parallel, including operations that exchange the partitions of an array between the processors. These operations constitute a deadlock-free, high-level form of communication.
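The core idea of the abstract, arrays split into per-processor partitions with local data-parallel operations plus a whole-partition exchange, can be pictured with a minimal Python sketch (not the paper's functional language; the partition layout and the `exchange` permutation are invented for illustration):

```python
# Sketch: a distributed array as a list of per-processor partitions.
# Element-wise work runs locally in parallel; communication is expressed as a
# single collective exchange of whole partitions, which avoids hand-written
# sends/receives and the deadlocks they can introduce.
from concurrent.futures import ThreadPoolExecutor

def map_partitions(f, partitions):
    """Apply f to every element of every partition, one parallel task per partition."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda part: [f(x) for x in part], partitions))

def exchange(partitions, permutation):
    """High-level communication: partition i moves to position permutation[i]."""
    out = [None] * len(partitions)
    for i, dest in enumerate(permutation):
        out[dest] = partitions[i]
    return out

parts = [[1, 2], [3, 4], [5, 6], [7, 8]]            # the distributed "unique" array
parts = map_partitions(lambda x: x * x, parts)      # local data-parallel step
parts = exchange(parts, [1, 2, 3, 0])               # cyclic shift of partitions
print(parts)                                        # [[49, 64], [1, 4], [9, 16], [25, 36]]
```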
Citations: 4
Compiling SVM-Fortran for the Intel Paragon XP/S
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504341
R. Berrendorf, M. Gerndt
SVM-Fortran is a language designed to program highly parallel systems with a global address space. A compiler for SVM-Fortran is described which generates code for parallel machines; our current target machine is the Intel Paragon XP/S with an SVM-extension called ASVM. Performance numbers are given for applications and compared to results obtained with corresponding HPF-versions.
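The global-address-space model such a language targets can be pictured with a small SPMD sketch (Python rather than Fortran; the block-distribution helper and array sizes are invented): every process runs the same program and updates only its own block of a globally indexed array, with no explicit message passing in the program text.

```python
# SPMD over a global address space, simulated sequentially: each process id
# owns a contiguous block of the global index range and updates only that block.
def my_block(pid, nprocs, n):
    """Global indices owned by process pid under a block distribution."""
    return range(pid * n // nprocs, (pid + 1) * n // nprocs)

n, nprocs = 16, 4
a = [0.0] * n                       # conceptually one shared, globally indexed array
for pid in range(nprocs):           # stand-in for the nprocs concurrent processes
    for i in my_block(pid, nprocs, n):
        a[i] = float(i * i)         # each process touches only its own block
print(a)
```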
Citations: 2
Interactive visualization of high-dimension iteration and data sets
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504358
Z.S. Chamski, G. A. Hedayat
Many well-formalized program transformations rely on techniques derived from linear algebra. In such transformations, program entities are represented using polyhedra, which are then transformed using linear or affine functions. However, reasoning within this abstract framework is made extremely difficult by the high dimensionality of the spaces used to represent complex program transformations and the various entities in the resulting programs: data sets, iteration domains, access functions, etc. This difficulty can be alleviated, at least partly, by providing tools for interactive visualization and manipulation of polyhedra and by integrating such tools into a programming environment. In this paper we explore the issues involved in designing an interactive visualization tool for high-dimensionality polyhedra, and discuss possible research directions arising from our current experience.
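The representation being visualized can be stated compactly: an iteration domain is a polyhedron {x : Ax <= b}, and a loop transformation is an affine map x -> Tx + t. A small sketch of that machinery (Python with NumPy; the domain bounds and the skewing matrix are just examples, not taken from the paper):

```python
# An iteration domain as a polyhedron {x : A @ x <= b} and its image under an
# invertible affine transformation y = T @ x + t, i.e. {y : A @ T^-1 @ (y - t) <= b}.
import numpy as np

A = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])    # 0 <= i <= 9, 0 <= j <= 9
b = np.array([0, 9, 0, 9])

def contains(A, b, x):
    return bool(np.all(A @ x <= b))

def transform(A, b, T, t):
    Tinv = np.linalg.inv(T)
    return A @ Tinv, b + A @ Tinv @ t

T = np.array([[1.0, 1.0], [0.0, 1.0]])              # loop skewing: (i, j) -> (i + j, j)
A2, b2 = transform(A, b, T, np.zeros(2))
x = np.array([3, 4])
print(contains(A, b, x), contains(A2, b2, T @ x))   # True True: the point follows the map
```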
Citations: 1
The parallel Fortran family and a new perspective
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504350
John Darlington, Yike Guo, Jin Yang
Various parallel Fortran languages have been developed over the years. The research work in creating this Parallel Fortran Family has made significant contributions to parallel programming language design and implementation. In this paper, various parallel Fortran languages are studied based on a uniform co-ordination approach towards parallel programming. That is, new language constructs in parallel Fortran systems are regarded as providing a co-ordination mechanism organising a set of single-threaded computations, coded in standard Fortran, into a parallel ensemble. Features of different parallel Fortran languages are studied by investigating their corresponding co-ordination models. A new perspective on designing a structured parallel Fortran system is proposed by using a generic structured co-ordination language, SCL, as the uniform means to organise parallel Fortran computation.
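The co-ordination view can be illustrated with a toy sketch (Python standing in for both levels; `parmap` is an invented construct, not SCL syntax): the computational kernels stay ordinary sequential functions, and a separate co-ordination layer arranges them into a parallel ensemble.

```python
# Co-ordination separated from computation: 'relax' is a plain single-threaded
# kernel; 'parmap' is the co-ordination construct that runs it over distributed
# blocks in parallel without touching the kernel's code.
from concurrent.futures import ProcessPoolExecutor

def relax(block):                       # sequential kernel (stand-in for Fortran code)
    return [0.25 * v for v in block]

def parmap(kernel, blocks, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, blocks))

if __name__ == "__main__":
    blocks = [[float(i)] * 4 for i in range(4)]     # data already split into blocks
    print(parmap(relax, blocks))
```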
Citations: 1
Term graph rewriting as a specification and implementation framework for concurrent object-oriented programming languages
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504353
R. Banach, G. A. Papadopoulos
This paper examines the usefulness of the generalised computational model of Term Graph Rewriting Systems (TGRS) for designing and implementing concurrent object-oriented languages, and for specifying and reasoning about the interaction between concurrency and object-orientation (such as concurrent synchronisation of methods, or interference between concurrency and inheritance). This is done by mapping a state-of-the-art functional object-oriented language onto the MONSTR computational model, a restricted form of TGRS specifically designed to act as a point of reference in the design and implementation of declarative and semi-declarative programming languages, especially those suited to distributed architectures.
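The essential difference between term graph rewriting and ordinary term (tree) rewriting is sharing: a rewritten subterm is updated once and every reference sees the result. A tiny sketch of that point (Python; the Add/Int rule is invented, and MONSTR's redirections and markings are not modelled):

```python
# A shared node rewritten in place: both occurrences of the subterm observe the
# result of the single reduction step, which is what makes it *graph* rewriting.
class Node:
    def __init__(self, sym, *args):
        self.sym, self.args = sym, list(args)

def rewrite_add(node):
    """Rule: Add(Int(a), Int(b)) -> Int(a + b), applied by overwriting the root node."""
    if node.sym == "Add" and all(a.sym == "Int" for a in node.args):
        node.sym, node.args = "Int", [sum(a.args[0] for a in node.args)]
        return True
    return False

x = Node("Add", Node("Int", 1), Node("Int", 2))   # shared subterm
root = Node("Pair", x, x)                         # used twice: Pair(x, x)
rewrite_add(x)                                    # one reduction step
left, right = root.args
print(left.sym, left.args, right is left)         # Int [3] True
```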
Citations: 4
Deriving optimal data distributions for group parallel numerical algorithms
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504339
T. Rauber, G. Runger, R. Wilhelm
Numerical algorithms often exhibit potential parallelism caused by a coarse structure of submethods in addition to the medium grain parallelism of systems within submethods. We present a derivation methodology for parallel programs of numerical methods on distributed memory machines that exploits both levels of parallelism in a group-SPMD parallel computation model. The derivation process starts with a specification of the numerical method in a module structure of submethods, and results in a parallel frame program containing all implementation decisions of the parallel implementation. The implementation derivation includes scheduling of modules, assigning processors to modules and choosing data distributions for basic modules. The methodology eases parallel programming and supplies a formal basis for automatic support. An analysis model allows performance predictions for parallel frame programs. In this article we concentrate on the determination of optimal data distributions using a dynamic programming approach based on data distribution types and incomplete run-time formulas.
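The dynamic-programming step can be sketched for the simplest case, a chain of modules (the distribution names and cost numbers below are invented; in the paper the costs come from run-time formulas): choose one data distribution per module so that compute cost plus redistribution cost between neighbouring modules is minimal.

```python
# Dynamic programming over a chain of modules: best[d] holds the cheapest cost of
# executing modules 0..m with module m using distribution d, plus the choices made.
DISTS = ["row-block", "col-block", "block-cyclic"]

compute = [                              # compute[m][d]: cost of module m under d
    {"row-block": 10, "col-block": 14, "block-cyclic": 12},
    {"row-block": 20, "col-block": 11, "block-cyclic": 13},
    {"row-block": 9,  "col-block": 15, "block-cyclic": 10},
]

def redist(d_from, d_to):                # cost of redistributing data between modules
    return 0 if d_from == d_to else 5

def optimal_distributions(compute):
    best = {d: (compute[0][d], [d]) for d in DISTS}
    for m in range(1, len(compute)):
        best = {d: min(((cost + redist(prev, d) + compute[m][d], choice + [d])
                        for prev, (cost, choice) in best.items()),
                       key=lambda entry: entry[0])
                for d in DISTS}
    return min(best.values(), key=lambda entry: entry[0])

print(optimal_distributions(compute))    # (35, ['block-cyclic', 'block-cyclic', 'block-cyclic'])
```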
Citations: 13
Parallel EARS [edge addition rewrite systems]
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504359
U. Assmann
In this paper we show how edge addition rewrite systems (EARS) can be evaluated in parallel. EARS are a simple variant of graph rewrite systems, which only add edges to graphs. Because EARS are equivalent to a subset of Datalog, they provide a programming model for rule-based applications. EARS terminate and are strongly confluent, which makes them perfectly apt for parallel execution. In this paper we present two parallel evaluation methods, order-domain partitioning and evaluation on carrier-graphs. EARS provide scalable parallelism because efficient sequential evaluation techniques also exist.
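Because rules only ever add edges, repeated application reaches a unique fixed point regardless of the order of rule firings, which is the strong confluence the abstract relies on. A minimal sketch with the classic transitive-closure rule (Python; the partition-based parallel evaluation itself is not shown):

```python
# Edge-addition fixed point for the rule  edge(a, d) :- edge(a, b), edge(b, d).
# Each round only adds edges, so the result is independent of evaluation order,
# which is what licenses evaluating independent rule instances in parallel.
def close(edges):
    edges = set(edges)
    while True:
        new = {(a, d) for (a, b) in edges for (c, d) in edges if b == c} - edges
        if not new:
            return edges
        edges |= new

print(sorted(close({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```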
Citations: 0
Provably correct vectorization of nested-parallel programs
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504361
J. Riely, J. Prins, S. Iyer
The work/step framework provides a high-level cost model for nested data-parallel programming languages, allowing programmers to understand the efficiency of their codes without concern for the eventual mapping of tasks to processors. Vectorization, or flattening, is the key technique for compiling nested-parallel languages. This paper presents a formal study of vectorization, considering three low-level targets: the EREW, bounded-contention CREW, and CREW variants of the VRAM. For each, we describe a variant of the cost model and prove the correctness of vectorization for that model. The models impose different constraints on the set of programs and implementations that can be considered; we discuss these in detail.
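Flattening can be illustrated on a toy example (Python; this is the segmented representation commonly used to describe nested data parallelism, not the paper's formalization): a nested sequence becomes one flat data vector plus a segment descriptor, and a nested operation such as summing each subsequence becomes a segmented reduction over the flat vector.

```python
# Flattening: the nested structure is captured by a segment descriptor, and the
# per-subsequence reductions are independent, so they can all run in parallel.
nested = [[1, 2, 3], [], [4, 5], [6]]

segments = [len(s) for s in nested]           # segment descriptor: [3, 0, 2, 1]
flat = [x for s in nested for x in s]         # flat data vector:   [1, 2, 3, 4, 5, 6]

def segmented_sum(flat, segments):
    out, start = [], 0
    for length in segments:
        out.append(sum(flat[start:start + length]))
        start += length
    return out

print(segmented_sum(flat, segments))          # [6, 0, 9, 6]
```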
Citations: 7
A package for automatic parallelization of serial C-programs for distributed systems
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504357
V. Beletsky, Alexander Bagaterenco, A. Chemeris
Problems arising when running existing software on parallel computer systems are considered. The problem may be formulated as follows: serial programs must first be analyzed and then modified so that they are able to run on parallel computers. The problems that arise are analyzed and ways to tackle them are given. The structure of the programming package is presented. It is substantiated that, for most sequential programs, the major share of execution time is spent in loops. Three loop parallelization methods have been selected for the implementation: the method of coordinates, the method of linear transformations, and a modified method of linear-piece parallelization. The principles of dependence graph construction are expounded and scheduling methods are enumerated.
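One building block behind such loop parallelization is a dependence test. A deliberately crude sketch of the idea (Python; real parallelizers use GCD/Banerjee-style tests over the subscript expressions): a loop that writes a[i] and reads a[i + k] carries a dependence between iterations unless k is zero.

```python
# Crude loop-carried dependence check for accesses a[i + write_off] (written)
# and a[i + read_off] (read) inside the same unit-stride loop over i.
def loop_carried_dependence(write_off, read_off):
    """True if one iteration reads a value written by a different iteration."""
    return write_off != read_off

# for i: a[i] = a[i - 1] + 1  -> True, must stay sequential
print(loop_carried_dependence(0, -1))
# for i: a[i] = 2 * a[i]      -> False, iterations are independent, parallelizable
print(loop_carried_dependence(0, 0))
```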
Citations: 1
Facilitating the development of portable parallel applications on distributed memory systems
Pub Date : 1995-10-09 DOI: 10.1109/PMMPC.1995.504356
C. Voliotis, G. Manis, A. Thanos, P. Tsanakas, G. Papakonstantinou
In this paper, two programming tools are presented that facilitate the development of portable parallel applications on distributed memory systems. The Orchid system is a software platform, i.e. a set of facilities for parallel programming. It consists of mechanisms for transparent message passing and a set of primitive functions to support the distributed shared memory programming model. In order to free the user from the tedious task of parallel programming, a new environment for logic programming is introduced: the Daffodil framework. Daffodil, implemented on top of Orchid, evaluates pure PROLOG programs by exploiting the inherent AND/OR parallelism. Both systems have been implemented and evaluated on various platforms, since the layered structure of Orchid ensures portability by requiring only a small part of the code to be re-engineered.
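The OR-parallelism that a system like Daffodil exploits can be pictured with a toy sketch (Python; the fact base and clause functions are invented, and AND-parallelism, unification, and backtracking are not modelled): alternative clauses for the same goal are explored concurrently, and every successful branch contributes a solution.

```python
# OR-parallelism: each clause that could prove the goal parent(X, "bob") is tried
# in its own task; the solutions of all branches are merged.
from concurrent.futures import ThreadPoolExecutor

FACTS = {("tom", "bob"), ("ann", "bob"), ("bob", "liz")}      # parent(P, C) facts

def clause_facts(child):
    return [p for (p, c) in FACTS if c == child]              # solutions from the facts

def clause_fail(child):
    return []                                                 # an alternative that fails

def solve_or_parallel(child, clauses):
    with ThreadPoolExecutor() as pool:
        branches = pool.map(lambda clause: clause(child), clauses)
    return [solution for branch in branches for solution in branch]

print(sorted(solve_or_parallel("bob", [clause_facts, clause_fail])))   # ['ann', 'tom']
```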
Citations: 5