{"title":"近常数时间并行算法的理论探讨","authors":"Joseph Gil, Yossi Matias, U. Vishkin","doi":"10.1109/SFCS.1991.185438","DOIUrl":null,"url":null,"abstract":"It is demonstrated that randomization is an extremely powerful tool for designing very fast and efficient parallel algorithms. Specifically, a running time of O(lg* n) (nearly-constant), with high probability, is achieved using n/lg* n (optimal speedup) processors for a wide range of fundamental problems. Also given is a constant time algorithm which, using n processors, approximates the sum of n positive numbers to within an error which is smaller than the sum by an order of magnitude. A variety of known and new techniques are used. New techniques, which are of independent interest, include estimation of the size of a set in constant time for several settings, and ways for deriving superfast optimal algorithms from superfast nonoptimal ones.<<ETX>>","PeriodicalId":320781,"journal":{"name":"[1991] Proceedings 32nd Annual Symposium of Foundations of Computer Science","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"137","resultStr":"{\"title\":\"Towards a theory of nearly constant time parallel algorithms\",\"authors\":\"Joseph Gil, Yossi Matias, U. Vishkin\",\"doi\":\"10.1109/SFCS.1991.185438\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It is demonstrated that randomization is an extremely powerful tool for designing very fast and efficient parallel algorithms. Specifically, a running time of O(lg* n) (nearly-constant), with high probability, is achieved using n/lg* n (optimal speedup) processors for a wide range of fundamental problems. Also given is a constant time algorithm which, using n processors, approximates the sum of n positive numbers to within an error which is smaller than the sum by an order of magnitude. A variety of known and new techniques are used. New techniques, which are of independent interest, include estimation of the size of a set in constant time for several settings, and ways for deriving superfast optimal algorithms from superfast nonoptimal ones.<<ETX>>\",\"PeriodicalId\":320781,\"journal\":{\"name\":\"[1991] Proceedings 32nd Annual Symposium of Foundations of Computer Science\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"137\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[1991] Proceedings 32nd Annual Symposium of Foundations of Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SFCS.1991.185438\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[1991] Proceedings 32nd Annual Symposium of Foundations of Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SFCS.1991.185438","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards a theory of nearly constant time parallel algorithms
It is demonstrated that randomization is an extremely powerful tool for designing very fast and efficient parallel algorithms. Specifically, a running time of O(lg* n) (nearly constant), with high probability, is achieved using n/lg* n (optimal speedup) processors for a wide range of fundamental problems. Also given is a constant time algorithm which, using n processors, approximates the sum of n positive numbers to within an error which is smaller than the sum by an order of magnitude. A variety of known and new techniques are used. New techniques, which are of independent interest, include estimation of the size of a set in constant time for several settings, and ways for deriving superfast optimal algorithms from superfast nonoptimal ones.
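The "nearly constant" bound refers to the iterated logarithm lg* n, which counts how many times the logarithm must be applied before the value drops to 1 or below. The sketch below is not from the paper; it is an illustrative Python helper showing why lg* n stays at 5 or less for any input size one could ever store.

```python
def log_star(n: int) -> int:
    """Iterated logarithm lg*: number of times floor(log2) must be
    applied to n before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = n.bit_length() - 1  # floor(log2(n)) for integers n > 1
        count += 1
    return count

# lg* grows extraordinarily slowly:
for n in [2, 16, 65536, 2**65536]:
    print(n.bit_length(), "bits ->", log_star(n))
# prints 1, 3, 4, and 5 respectively
```

Since lg*(2^65536) = 5, a running time of O(lg* n) with n/lg* n processors behaves, for all practical purposes, like constant time at optimal speedup.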