{"title":"Scalable Abstractions for Parallel Programming","authors":"W. Griswold, G. Harrison, D. Notkin, L. Snyder","doi":"10.1109/DMCC.1990.556312","DOIUrl":null,"url":null,"abstract":"Writing parallel programs that scale-that is, that naturally and efficiently adapt to the size of the problem and the number of processors available-is difficult for two reasons. First, the overhead of multiplexing the processing of data points assigned to a given processor is often great. Second, to achieve scaling in asymptotic performance, the algorithm that uses the interprocessor communication structure may need to differ from the algorithm used to process points located within an individual processor. We present abstractions intended to overcome these problems, making it straightforward to define scalable parallel program. The central abstraction is an ensemble,. which gives programmers a global view of physically distributed data, computation, and communication. We demonstrate the application of these ensembles to two variants of Batcher’s sort, describing how the concepts apply to other parallel programs.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"126 30","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"36","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DMCC.1990.556312","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 36
Abstract
Writing parallel programs that scale, that is, that naturally and efficiently adapt to the size of the problem and the number of processors available, is difficult for two reasons. First, the overhead of multiplexing the processing of the data points assigned to a given processor is often great. Second, to achieve scaling in asymptotic performance, the algorithm that uses the interprocessor communication structure may need to differ from the algorithm used to process points located within an individual processor. We present abstractions intended to overcome these problems, making it straightforward to define scalable parallel programs. The central abstraction is an ensemble, which gives programmers a global view of physically distributed data, computation, and communication. We demonstrate the application of these ensembles to two variants of Batcher's sort, describing how the concepts apply to other parallel programs.
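The abstract's second point, that the inter-processor algorithm may differ from the intra-processor one, can be made concrete with a small sketch. The following Python code is ours, not the paper's (the names compare_split and batcher_block_sort are hypothetical): it simulates a block-distributed bitonic sort, one of Batcher's sorting networks. Each "processor" holds one block; an ordinary sequential sort handles the points within a block, while Batcher's comparator network, with each compare-exchange generalized to a compare-split of whole sorted blocks, handles the communication structure across blocks.

# Sketch only: block-level bitonic sort, simulating p processors as p lists.
# Intra-processor work uses a plain sequential sort; inter-processor work
# follows Batcher's bitonic network at block granularity.

def compare_split(lo, hi):
    """Block-level analogue of compare-exchange: merge two sorted blocks,
    return (lower half, upper half), both sorted."""
    merged = sorted(lo + hi)
    n = len(lo)
    return merged[:n], merged[n:]

def batcher_block_sort(blocks):
    """Sort equal-sized blocks with Batcher's bitonic network applied at
    block granularity. Assumes len(blocks) is a power of two."""
    p = len(blocks)
    blocks = [sorted(b) for b in blocks]   # intra-processor phase
    k = 1
    while k < p:                           # inter-processor phase
        j = k
        while j >= 1:
            for i in range(p):
                partner = i ^ j            # comparator wiring of the network
                if partner > i:
                    # Comparator direction alternates to build bitonic runs.
                    ascending = (i & (k << 1)) == 0
                    lo_blk, hi_blk = compare_split(blocks[i], blocks[partner])
                    if ascending:
                        blocks[i], blocks[partner] = lo_blk, hi_blk
                    else:
                        blocks[i], blocks[partner] = hi_blk, lo_blk
            j //= 2
        k *= 2
    return blocks

# Example: four "processors", three points each.
print(batcher_block_sort([[5, 2, 9], [1, 8, 3], [7, 7, 0], [4, 6, 2]]))
# -> [[0, 1, 2], [2, 3, 4], [5, 6, 7], [7, 8, 9]]

This correctness of the block-level network rests on a standard result: any sorting network on p keys also sorts p pre-sorted blocks when compare-exchange is replaced by merge-split. The paper's ensemble abstraction addresses how such two-level programs are expressed; the sketch above only illustrates why the two levels need different algorithms.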