
Latest publications: Proceedings of Workshop on Programming Models for Massively Parallel Computers

Reduced interprocessor-communication architecture for supporting programming models
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315546
S. Sakai, K. Okamoto, Y. Kodama, M. Sato
The paper presents an execution model and a processor architecture for general-purpose massively parallel computers. To construct an efficient massively parallel computer: the execution model should be natural enough to map an actual problem structure onto a processor architecture; each processor should have an efficient and simple communication structure; and computation and communication should be tightly coupled, with their operation highly overlapped. To meet these requirements, we obtain a simplified architecture with a Continuation Driven Execution Model. We call this architecture RICA. RICA consists of a simplified message-handling pipeline, a continuation-driven thread invocation mechanism, a RISC core for instruction execution, a message-generation pipeline which can send messages asynchronously with other operations, and a thread-switching mechanism with little overhead, all fused in a simple architecture. Next, we state how RICA realizes the parallel primitives of programming models, and how efficiently it does so. The primitives examined are shared memory primitives, message passing primitives, and barriers.
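The barrier primitive mentioned in the abstract can be illustrated with a software sketch. The Python below is not RICA's mechanism (which is realized in the processor hardware); it is a generic counting barrier showing the semantics such a primitive must provide: no thread proceeds to the next phase until all threads have arrived.

```python
import threading

class SimpleBarrier:
    """Minimal counting barrier (illustrative only; RICA's barrier
    is a hardware mechanism, not this software scheme)."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.n:
                # last arrival: reset and release everyone in this generation
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:
                    self.cond.wait()

results = []
bar = SimpleBarrier(4)

def worker(i):
    results.append(("phase1", i))
    bar.wait()              # no thread enters phase 2 until all finish phase 1
    results.append(("phase2", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every phase1 entry in `results` precedes every phase2 entry
```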
Citations: 5
The DSPL programming environment
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315556
A. Mitschele-Thiel
Gives an overview of the principal concepts employed in the DSPL (Data Stream Processing Language) programming environment, an integrated approach to automating the system design and implementation of parallel applications. The programming environment consists of a programming language and the following set of integrated tools: (1) the modeling tool automatically derives a software model from the given application program; (2) the model-based optimization tool uses the software model to compute design decisions such as network topology, task granularity, task assignment and task execution order; (3) finally, the compiler/optimizer transforms the application program into executable code for the chosen processor network, reflecting the design decisions.
Citations: 6
Parallel symbolic processing-can it be done?
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315558
A. Sodan
My principal answer is: yes, but it depends. Parallelization of symbolic applications is possible, but only for certain classes of applications. Distributed memory may prevent parallelization in cases where the ratio of communication overhead to computation becomes too high, but it may also be an advantage when applications require much garbage collection, which can then be done in a distributed way. There are also some applications with a higher degree of parallelism than shared memory can support, and these are candidates for profiting from massively parallel architectures.
Citations: 0
On the implementation of virtual shared memory
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315542
W. Zimmermann, H. Kumm
The field of parallel algorithms has demonstrated that a machine model with virtual shared memory is easy to program. Most work in this field has been based on the PRAM model. Theoretical results show that a PRAM can be simulated optimally on an interconnection network. We discuss implementations of some of these PRAM simulations and evaluate their performance.
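The standard technique behind such PRAM simulations is to hash the shared address space over the memory modules of the interconnection network, so that no single module becomes a hotspot. The Python sketch below is a generic illustration, not taken from the paper; the module count and hash coefficients are arbitrary assumptions.

```python
# Distribute a shared address space over P memory modules with a linear
# hash h(a) = ((K*a + C) mod PRIME) mod P, the usual trick in PRAM
# simulations to spread accesses evenly over modules.
P = 8                       # number of memory modules (assumed)
PRIME = 2_000_003           # prime larger than the address space
K, C = 1_234_567, 42        # hash coefficients, fixed here for clarity

def module_of(addr):
    """Module that owns shared address addr."""
    return ((K * addr + C) % PRIME) % P

# The "shared" memory, realised as one dictionary per module
modules = [dict() for _ in range(P)]

def shared_write(addr, val):
    modules[module_of(addr)][addr] = val

def shared_read(addr):
    return modules[module_of(addr)][addr]

for a in range(1000):
    shared_write(a, a * a)

# per-module load after 1000 writes to consecutive addresses
loads = [len(m) for m in modules]
```

The point of the hash is that even a worst-case access pattern (e.g. consecutive addresses) is scattered across modules rather than landing on one of them.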
Citations: 5
Virtual shared memory-based support for novel (parallel) programming paradigms
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315552
J. Keane, M. Xu
Discusses the implementation of novel programming paradigms on virtual shared memory (VSM) parallel architectures. A wide spectrum of paradigms (data-parallel, functional and logic languages) has been investigated in order to achieve, within the context of VSM parallel architectures, a better understanding of the underlying support mechanisms for the paradigms and to identify commonality among the different mechanisms. An overview of VSM is given in the context of a commercially available VSM machine, the KSR-1. The correspondence between the features of the high-level languages and the VSM features that assist efficient implementation is presented. Case studies are discussed as concrete examples of the issues involved.
Citations: 2
Overall design of Pandore II: an environment for high performance C programming on DMPCs
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315557
F. André, Jean-Louis Pazat
Pandore II is an environment designed for parallel execution of imperative sequential programs on distributed memory parallel computers (DMPCs). It comprises a compiler, libraries for different target distributed computers, and execution analysis tools. No specific knowledge of the target machine is required of the user: only the specification of the data decomposition is left to the user. The purpose of the paper is to present the overall design of the Pandore II environment. The high performance C input language is described, and the main principles of the compilation and optimization techniques are presented. An example is used throughout the paper to illustrate the development process starting from a sequential C program in the Pandore II environment.
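As an illustration of what "specification of the data decomposition" typically means in such systems, the Python sketch below shows the two classic mappings of a one-dimensional array onto processors: block and cyclic. The function names are hypothetical illustrations; Pandore II's actual notation is not given in the abstract.

```python
# Two classic decompositions of an n-element array over P processors.

def block_owner(i, n, P):
    """Contiguous blocks: the first ceil(n/P) elements go to processor 0,
    the next ceil(n/P) to processor 1, and so on."""
    return i // -(-n // P)          # -(-n // P) computes ceil(n / P)

def cyclic_owner(i, n, P):
    """Round-robin: element i goes to processor i mod P."""
    return i % P

n, P = 8, 3
print([block_owner(i, n, P) for i in range(n)])   # [0, 0, 0, 1, 1, 1, 2, 2]
print([cyclic_owner(i, n, P) for i in range(n)])  # [0, 1, 2, 0, 1, 2, 0, 1]
```

Block mappings favor locality in stencil-like loops; cyclic mappings balance load when work per element varies, which is why environments of this kind let the user choose.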
Citations: 0
An experimental parallelizing systolic compiler for regular programs
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315551
F. Wichmann
Systolic transformation techniques are used for the parallelization of regular loop programs. After a short introduction to systolic transformation, an experimental compiler system is presented that generates parallel C code by applying different transformation methods. This system is designed as a basis for development towards a systolic compiler generating efficient fine-grained parallel code for regular programs or program parts.
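To show the kind of transformation involved, here is a generic sketch (not the compiler's output) of a systolic wavefront schedule for a regular loop nest. For the recurrence a[i][j] = a[i-1][j] + a[i][j-1], with dependence vectors (1,0) and (0,1), the classic timing function is t(i,j) = i + j: all cells on an anti-diagonal are independent and can fire in the same step.

```python
# Wavefront (systolic) schedule for the regular loop nest
#   for i in 1..n: for j in 1..n: a[i][j] = a[i-1][j] + a[i][j-1]
n = 4
a = [[1] * (n + 1) for _ in range(n + 1)]   # boundary rows/columns set to 1

for t in range(2, 2 * n + 1):               # time steps (wavefronts)
    wave = [(i, t - i) for i in range(1, n + 1) if 1 <= t - i <= n]
    # every (i, j) in `wave` has both operands computed at step t-1,
    # so the whole anti-diagonal could execute in parallel
    for i, j in wave:
        a[i][j] = a[i - 1][j] + a[i][j - 1]

# with all-ones boundaries this computes binomial coefficients C(i+j, i)
```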
Citations: 2
Beyond the data parallel paradigm: issues and options
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315541
G. Gao, Vivek Sarkar, L. A. Vazquez
Currently, the predominant approach to compiling a program for parallel execution on a distributed memory multiprocessor is driven by the data parallel paradigm, in which user-specified data mappings are used to derive computation mappings via ad hoc rules such as owner-computes. We explore a more general approach, driven by the selection of computation mappings from the program dependence constraints and by the selection of dynamic data mappings from the localization constraints in the different computation phases of the program. We state the optimization problems addressed by this approach and outline the solution methods that can be used. We believe this approach provides promising solutions beyond what can be achieved by the data parallel paradigm. The paper outlines the general program model assumed for this work, states the optimization problems addressed by the approach, and presents solutions to these problems.
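The owner-computes rule referred to above can be made concrete with a small sketch: each loop iteration is executed by the processor that owns the array element being written. The block distribution and helper names below are illustrative assumptions, not the paper's notation.

```python
# Owner-computes: iteration i of "a[i] = b[i] + 1" runs on the processor
# that owns a[i] under a block distribution of a over nprocs processors.

def owner(i, n, nprocs):
    """Processor owning element i of an n-element block-distributed array."""
    block = (n + nprocs - 1) // nprocs      # ceil(n / nprocs)
    return i // block

n, nprocs = 10, 4
b = list(range(n))
a = [None] * n

# simulate each processor executing only the iterations it owns
for p in range(nprocs):
    my_iters = [i for i in range(n) if owner(i, n, nprocs) == p]
    for i in my_iters:
        a[i] = b[i] + 1         # p owns a[i], so p performs the write
```

The rule is "ad hoc" in the abstract's sense because the computation mapping is forced by the data mapping, regardless of whether that placement minimizes communication; the paper's approach derives the two mappings from dependence and localization constraints instead.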
Citations: 3
A programming model for reconfigurable mesh based parallel computers
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315547
M. Maresca, P. Baglietto
The paper describes a high-level programming model for reconfigurable mesh architectures. We analyze the engineering and technological issues in implementing reconfigurable mesh architectures and define an abstract architecture called the polymorphic processor array (PPA). We define both a computation model and a programming model for polymorphic processor arrays and, based on this programming model, design a parallel programming language called Polymorphic Parallel C, for which we have implemented a compiler and a simulator. We have used these tools to validate a number of PPA algorithms and to estimate the performance of the corresponding programs.
Citations: 9
Structuring data parallelism using categorical data types
Pub Date : 1993-09-20 DOI: 10.1109/PMMP.1993.315549
D. Skillicorn
Data parallelism is a powerful approach to parallel computation, particularly when used with complex data types. Categorical data types are extensions of abstract data types that structure computations in a way that is useful for parallel implementation. In particular, they decompose the search for good algorithms on a data type into subproblems; all homomorphisms can be implemented by a single recursive, and often parallel, schema; and they are equipped with an equational system that can be used for software development by transformation.
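The "single recursive, and often parallel, schema" for homomorphisms can be sketched concretely. A list homomorphism h is determined by a per-element function f and an associative combiner op, with h(xs ++ ys) = op(h(xs), h(ys)); every such h factors into a map followed by a reduction. The Python below is a generic illustration of this well-known factorization, not code from the paper.

```python
from functools import reduce

def hom(f, op, unit, xs):
    """List homomorphism: reduce(op, map(f, xs)).
    Because op is associative, xs can be split anywhere, the parts
    evaluated in parallel, and the results combined with op."""
    if not xs:
        return unit
    return reduce(op, map(f, xs))

# Three instances of the same schema
xs = [3, 1, 4, 1, 5]
length = hom(lambda _: 1, lambda a, b: a + b, 0, xs)        # list length
sumsq  = hom(lambda x: x * x, lambda a, b: a + b, 0, xs)    # sum of squares
mx     = hom(lambda x: x, max, float("-inf"), xs)           # maximum

# The homomorphism property: split, evaluate the parts, combine
left, right = xs[:2], xs[2:]
assert sumsq == (hom(lambda x: x * x, lambda a, b: a + b, 0, left)
                 + hom(lambda x: x * x, lambda a, b: a + b, 0, right))
```

The split-and-combine equation is exactly what makes the schema parallelizable: the two halves are independent subcomputations.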
Citations: 14