The MetaMP approach to parallel programming
S. Otto, M. Wolfe
[Proceedings 1992] The Fourth Symposium on the Frontiers of Massively Parallel Computation
Published: 1992-10-19
DOI: 10.1109/FMPC.1992.234921
Citations: 1
Abstract
The authors are researching techniques for programming large-scale parallel machines for scientific computation. They use an intermediate-level language, MetaMP, that sits between High Performance Fortran (HPF) and low-level message passing. They are developing an efficient set of primitives in the intermediate language and investigating compilation methods that can semi-automatically reason about parallel programs. The focus is on distributed-memory hardware. The work has many similarities with HPF efforts, although their approach is aimed at shorter-term solutions. They plan to keep the programmer centrally involved in the development and optimization of the parallel program.
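The layering the abstract describes (primitives sitting between data-parallel HPF code and explicit message passing) can be illustrated with a small sketch. The names `block_distribute` and `exchange_halo` below are hypothetical, chosen only to show the kind of bookkeeping such an intermediate layer hides; they are not MetaMP's actual primitives.

```python
# Hypothetical sketch of an intermediate-level primitive layer: the
# programmer works with block-distributed arrays and a halo-exchange
# primitive, while the layer does the index bookkeeping that raw
# message passing would expose. Names are illustrative, not MetaMP's API.

def block_distribute(data, nprocs):
    """Split a global array into contiguous per-process blocks."""
    data = list(data)
    size, rem = divmod(len(data), nprocs)
    blocks, start = [], 0
    for p in range(nprocs):
        end = start + size + (1 if p < rem else 0)
        blocks.append(data[start:end])
        start = end
    return blocks

def exchange_halo(blocks):
    """Give each block a one-element halo from its neighbors,
    standing in for the sends/receives a compiler would emit."""
    padded = []
    for p, blk in enumerate(blocks):
        left = blocks[p - 1][-1] if p > 0 else None
        right = blocks[p + 1][0] if p < len(blocks) - 1 else None
        padded.append((left, blk, right))
    return padded

if __name__ == "__main__":
    blocks = block_distribute(range(10), 3)
    print(blocks)    # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(exchange_halo(blocks)[1])  # (3, [4, 5, 6], 7)
```

The point of the sketch is the division of labor: the programmer stays involved (choosing the distribution, calling the exchange), but the error-prone index arithmetic lives in the primitive layer rather than in hand-written message-passing code.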