Distributed priority queues on hypercube architectures

Sajal K. Das, M. C. Pinotti, F. Sarkar
{"title":"超多维数据集架构上的分布式优先级队列","authors":"Sajal K. Das, M. C. Pinotti, F. Sarkar","doi":"10.1109/ICDCS.1996.508013","DOIUrl":null,"url":null,"abstract":"We efficiently map a priority queue on the hypercube architecture in a load balanced manner, with no additional communication overhead. Two implementations for insert and deletemin operations are proposed on the single-port hypercube model. In a b-bandwidth, n-item priority queue in which every node contains b items in sorted order, the first implementation achieves optimal speed-up of O[min{log n, b(log n)/(log b+log log n)}] for inserting b pre-sorted items or deleting b smallest items, where b=O(n/sup 1/c/) with c>1. In particular, single insertion and deletion operations are cost-optimal and require O(log n/p+log p) time using O(log n/log log n) processors. The second implementation is more scalable since it uses a larger number of processors, and attains a 'nearly' optimal speed-up on the single-port hypercube. The insertion of log n pre-sorted items or the deletion of log n smallest items requires O(log log n)/sup 2/ time and O(log/sup 2/ n/log log n) processors. However, on the slightly more powerful pipelined hypercube model, we are able to reduce the time complexity to O(log log n) thus attaining optimal speed-up. To the best of our knowledge, our algorithms provide the first implementations of b-bandwidth distributed priority queues, which are load balanced and yet guarantee optimal speed-up.","PeriodicalId":159322,"journal":{"name":"Proceedings of 16th International Conference on Distributed Computing Systems","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Distributed priority queues on hypercube architectures\",\"authors\":\"Sajal K. Das, M. C. Pinotti, F. Sarkar\",\"doi\":\"10.1109/ICDCS.1996.508013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We efficiently map a priority queue on the hypercube architecture in a load balanced manner, with no additional communication overhead. Two implementations for insert and deletemin operations are proposed on the single-port hypercube model. In a b-bandwidth, n-item priority queue in which every node contains b items in sorted order, the first implementation achieves optimal speed-up of O[min{log n, b(log n)/(log b+log log n)}] for inserting b pre-sorted items or deleting b smallest items, where b=O(n/sup 1/c/) with c>1. In particular, single insertion and deletion operations are cost-optimal and require O(log n/p+log p) time using O(log n/log log n) processors. The second implementation is more scalable since it uses a larger number of processors, and attains a 'nearly' optimal speed-up on the single-port hypercube. The insertion of log n pre-sorted items or the deletion of log n smallest items requires O(log log n)/sup 2/ time and O(log/sup 2/ n/log log n) processors. However, on the slightly more powerful pipelined hypercube model, we are able to reduce the time complexity to O(log log n) thus attaining optimal speed-up. 
To the best of our knowledge, our algorithms provide the first implementations of b-bandwidth distributed priority queues, which are load balanced and yet guarantee optimal speed-up.\",\"PeriodicalId\":159322,\"journal\":{\"name\":\"Proceedings of 16th International Conference on Distributed Computing Systems\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1996-05-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of 16th International Conference on Distributed Computing Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCS.1996.508013\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 16th International Conference on Distributed Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS.1996.508013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

We efficiently map a priority queue onto the hypercube architecture in a load-balanced manner, with no additional communication overhead. Two implementations of the insert and deletemin operations are proposed on the single-port hypercube model. For a b-bandwidth, n-item priority queue, in which every node contains b items in sorted order, the first implementation achieves an optimal speed-up of O(min{log n, b log n / (log b + log log n)}) for inserting b presorted items or deleting the b smallest items, where b = O(n^(1/c)) with c > 1. In particular, single insertion and deletion operations are cost-optimal and require O(log n / p + log p) time using p = O(log n / log log n) processors. The second implementation is more scalable since it uses a larger number of processors, and attains a 'nearly' optimal speed-up on the single-port hypercube: inserting log n presorted items or deleting the log n smallest items requires O((log log n)^2) time and O(log^2 n / log log n) processors. On the slightly more powerful pipelined hypercube model, however, the time complexity can be reduced to O(log log n), attaining optimal speed-up. To the best of our knowledge, our algorithms provide the first implementations of b-bandwidth distributed priority queues that are load balanced and yet guarantee optimal speed-up.
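For intuition only, the following minimal sequential sketch in Python illustrates the b-bandwidth interface the abstract refers to: inserting b presorted keys and deleting the b smallest keys as single batch operations. An ordinary binary heap (Python's heapq) stands in for the load-balanced hypercube structure; the class name BatchPriorityQueue is a hypothetical illustration, and the sketch makes no attempt at the authors' parallel algorithms or the stated complexity bounds.

```python
import heapq
from typing import List


class BatchPriorityQueue:
    """Sequential sketch of the b-bandwidth priority-queue interface:
    insert() takes b presorted keys at once, deletemin() returns the b
    smallest keys at once. A plain binary heap stands in for the
    load-balanced, distributed hypercube structure described in the paper."""

    def __init__(self, b: int):
        self.b = b                     # batch width: items moved per operation
        self._heap: List[int] = []     # underlying binary heap

    def insert(self, batch: List[int]) -> None:
        """Insert a batch of b presorted keys."""
        assert len(batch) == self.b
        for key in batch:
            heapq.heappush(self._heap, key)

    def deletemin(self) -> List[int]:
        """Remove and return the b smallest keys, in sorted order."""
        count = min(self.b, len(self._heap))
        return [heapq.heappop(self._heap) for _ in range(count)]


if __name__ == "__main__":
    pq = BatchPriorityQueue(b=4)
    pq.insert([3, 7, 9, 12])
    pq.insert([1, 5, 6, 8])
    print(pq.deletemin())  # [1, 3, 5, 6]
```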