{"title":"群并行数值算法的最优数据分布","authors":"T. Rauber, G. Runger, R. Wilhelm","doi":"10.1109/PMMPC.1995.504339","DOIUrl":null,"url":null,"abstract":"Numerical algorithms often exhibit potential parallelism caused by a coarse structure of submethods in addition to the medium grain parallelism of systems within submethods. We present a derivation methodology for parallel programs of numerical methods on distributed memory machines that exploits both levels of parallelism in a group-SPMD parallel computation model. The derivation process starts with a specification of the numerical method in a module structure of submethods, and results in a parallel frame program containing all implementation decisions of the parallel implementation. The implementation derivation includes scheduling of modules, assigning processors to modules and choosing data distributions for basic modules. The methodology eases parallel programming and supplies a formal basis for automatic support. An analysis model allows performance predictions for parallel frame programs. In this article we concentrate on the determination of optimal data distributions using a dynamic programming approach based on data distribution types and incomplete run-time formulas.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Deriving optimal data distributions for group parallel numerical algorithms\",\"authors\":\"T. Rauber, G. Runger, R. Wilhelm\",\"doi\":\"10.1109/PMMPC.1995.504339\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Numerical algorithms often exhibit potential parallelism caused by a coarse structure of submethods in addition to the medium grain parallelism of systems within submethods. We present a derivation methodology for parallel programs of numerical methods on distributed memory machines that exploits both levels of parallelism in a group-SPMD parallel computation model. The derivation process starts with a specification of the numerical method in a module structure of submethods, and results in a parallel frame program containing all implementation decisions of the parallel implementation. The implementation derivation includes scheduling of modules, assigning processors to modules and choosing data distributions for basic modules. The methodology eases parallel programming and supplies a formal basis for automatic support. An analysis model allows performance predictions for parallel frame programs. 
In this article we concentrate on the determination of optimal data distributions using a dynamic programming approach based on data distribution types and incomplete run-time formulas.\",\"PeriodicalId\":344246,\"journal\":{\"name\":\"Programming Models for Massively Parallel Computers\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1995-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Programming Models for Massively Parallel Computers\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PMMPC.1995.504339\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Programming Models for Massively Parallel Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PMMPC.1995.504339","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deriving optimal data distributions for group parallel numerical algorithms
Abstract: Numerical algorithms often exhibit potential parallelism arising from a coarse structure of submethods, in addition to the medium-grain parallelism of the systems within the submethods. We present a derivation methodology for parallel programs implementing numerical methods on distributed-memory machines that exploits both levels of parallelism in a group-SPMD parallel computation model. The derivation process starts from a specification of the numerical method as a module structure of submethods and results in a parallel frame program that captures all decisions of the parallel implementation. The derivation includes scheduling the modules, assigning processors to modules, and choosing data distributions for the basic modules. The methodology eases parallel programming and provides a formal basis for automatic support. An analysis model allows performance predictions for parallel frame programs. In this article we concentrate on determining optimal data distributions using a dynamic programming approach based on data distribution types and incomplete run-time formulas.
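The paper defines its dynamic program over parallel frame programs; the Python sketch below only illustrates the general idea under simplifying assumptions: a linear sequence of basic modules, a fixed set of candidate data distribution types, a per-module runtime estimate comp_cost (standing in for the paper's run-time formulas), and a pairwise redist_cost for changing distributions between modules. All identifiers here are hypothetical and not taken from the paper.

def optimal_distributions(modules, dist_types, comp_cost, redist_cost):
    """Hypothetical sketch: pick one distribution type per module in a
    linear module sequence so that estimated computation time plus
    redistribution time is minimal (a Viterbi-style dynamic program).

    comp_cost(m, d): estimated runtime of module m under distribution d.
    redist_cost(p, d): estimated cost of redistributing data from p to d
                       between consecutive modules.
    Returns (total_cost, list of (module, chosen distribution))."""
    # best[d]: minimal cost of the prefix processed so far, ending in d.
    best = {d: comp_cost(modules[0], d) for d in dist_types}
    back = []  # per module, the best predecessor distribution for each d
    for m in modules[1:]:
        new_best, ptr = {}, {}
        for d in dist_types:
            # Cheapest way to arrive at distribution d for module m.
            pred = min(dist_types, key=lambda p: best[p] + redist_cost(p, d))
            new_best[d] = best[pred] + redist_cost(pred, d) + comp_cost(m, d)
            ptr[d] = pred
        best = new_best
        back.append(ptr)
    # Recover the cheapest final distribution, then backtrack.
    last = min(best, key=best.get)
    chosen = [last]
    for ptr in reversed(back):
        chosen.append(ptr[chosen[-1]])
    chosen.reverse()
    return best[last], list(zip(modules, chosen))

if __name__ == "__main__":
    # Toy example with invented module names, distribution types, and costs.
    cost, plan = optimal_distributions(
        modules=["factorize", "solve"],
        dist_types=["row_block", "col_cyclic"],
        comp_cost=lambda m, d: 1.0 if d == "row_block" else 1.2,
        redist_cost=lambda p, d: 0.0 if p == d else 0.5,
    )
    print(cost, plan)  # 2.0 [('factorize', 'row_block'), ('solve', 'row_block')]

Because the redistribution cost couples only neighbouring modules in this simplified linear setting, the table-filling search runs in O(n * |D|^2) for n modules and |D| distribution types; the paper's group-SPMD setting, with processor groups and concurrently executing modules, is correspondingly richer.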