Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470862
"Lessons learned during the implementation of the BVR Wireless Sensor Network protocol on SunSPOTs"
Ralph Robert Erdt, M. Gergeleit
The Beacon Vector Routing (BVR) protocol [1] is a well-known routing protocol for Wireless Sensor Networks (WSNs). Simulations have shown that the protocol scales well in an environment with perfect links and ideal circular radio coverage. However, when the protocol is implemented on embedded hardware that uses IEEE 802.15.4 2.4 GHz wireless transceivers, several problems emerge that have a significant impact on its overall performance.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470702
"Particle Swarm Optimization to solve the Vehicle Routing Problem with Heterogeneous fleet, Mixed Backhauls, and time windows"
Farah Belmecheri, C. Prins, F. Yalaoui, L. Amodeo
Many distribution companies must deliver and pick up goods to satisfy customers. This problem is known as the Vehicle Routing Problem with Mixed linehauls and Backhauls (VRPMB), in which some goods must be delivered from a depot to linehaul customers, while others must be picked up at backhaul customers and brought back to the depot. This paper studies an enriched version, the Heterogeneous fleet VRPMB with Time Windows (HVRPMBTW), which has received little attention in the literature. A Particle Swarm Optimization (PSO) heuristic is proposed to solve this problem. The approach models the social behavior of bird flocking and fish schooling. The adaptation of the PSO search strategy to the HVRPMBTW is explained, and the results are compared both to previous work (Ant Colony Optimization) and to the high-quality solutions obtained by an exact method (the CPLEX solver). Promising results are reported that show the effectiveness of the method.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470757
"Prototype for a large-scale static timing analyzer running on an IBM Blue Gene"
A. Holder, C. Carothers, Kerim Kalafala
This paper focuses on parallelization of the classic static timing analysis (STA) algorithm for verifying the timing characteristics of digital integrated circuits. Given ever-increasing circuit complexities, including the need to analyze circuits with billions of transistors, across potentially thousands of process corners, with accuracy tolerances down to the picosecond range, sequential execution of STA algorithms is quickly becoming a bottleneck in the overall chip design closure process. A message-passing-based parallel processing technique for performing STA on an IBM Blue Gene/L supercomputing platform is presented. Results are collected for a small industrial 65 nm benchmark design, where the algorithm demonstrates a speedup of nearly 39 times on 64 processors and a peak of 119 times (263 times without partitioning costs) on 1024 processors. On an idealized synthetic circuit, the algorithm demonstrates a 259 times speedup (925 times without partitioning overhead) on 1024 processors. To the best of our knowledge, this is the first result demonstrating scalable STA on the IBM Blue Gene.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470814
"High precision integer multiplication with a graphics processing unit"
Niall Emmart, C. Weems
In this paper we evaluate the potential for using an NVIDIA graphics processing unit (GPU) to accelerate high-precision integer multiplication. The reported peak vector performance for a typical GPU appears to offer considerable potential for accelerating such a regular computation. Because of limitations in the on-chip memory, the high cost of kernel launches, and the particular nature of the architecture's support for parallelism, we found it necessary to use a hybrid algorithmic approach to obtain good performance. On the GPU itself we use an adaptation of the Strassen FFT algorithm to multiply 32KB chunks, while on the CPU we adapt the Karatsuba divide-and-conquer approach to optimize the application of the GPU's partial multiplies, which are viewed as “digits” by our implementation of Karatsuba. Even with this approach, the result is at best a modest increase in performance compared with executing the same multiplication using the GMP package on a CPU at a comparable technology node. We identify the sources of this lackluster performance and discuss the likely impact of planned advances in GPU architecture.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470686
"Modeling bounds on migration overhead for a traveling thread architecture"
P. Fratta, P. Kogge
Heterogeneous multicore architectures have gained widespread use in the general-purpose and scientific computing communities, and architects continue to investigate techniques for easing the burden of parallelization on the programmer. This paper presents a new class of heterogeneous multicores that leverages past work on architectures supporting the execution of traveling threads. These traveling threads execute on simple cores distributed across the chip and can move up the hierarchy and between cores based on data locality. This new design offers improved performance at lower energy and power density than centralized counterparts through intelligent data placement and cooperative caching policies. We employ a methodology consisting of mathematical modeling and simulation to estimate the upper bounds on migration overhead for various architectural organizations. Results illustrate that the new architecture can match the performance of a conventional processor with reasonable thread sizes. We have observed that between 0.04 and 7.09 instructions per migration (IPM) (1.88 IPM on average) are sufficient to match the performance of the conventional processor. These results confirm that this distributed architecture and corresponding execution model offer promising potential for overcoming the design challenges of centralized counterparts.
Pub Date: 2010-04-19 · DOI: 10.1142/S0129054112400394
"Randomized self-stabilizing leader election in preference-based anonymous trees"
Daniel Fajardo-Delgado, José Alberto Fernández-Zepeda, A. Bourgeois
The performance of the processors in a distributed system can be measured by parameters such as bandwidth, storage capacity, work capability, reliability, manufacturing technology, and years of usage, among others. An algorithm using a preference-based approach uses these parameters to make decisions. In this paper we introduce a randomized self-stabilizing leader election algorithm for preference-based anonymous trees. Our algorithm uses the preferences of the processors as the criteria to select a leader under symmetric or non-symmetric configurations. It is partially inspired by Xu and Srimani's algorithm, but we use a distributed daemon and randomization to break symmetry. We prove that our algorithm has optimal average time complexity, and we performed simulations to verify our results.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470733
"Flexible IP cores for the k-NN classification problem and their FPGA implementation"
E. Manolakos, I. Stamoulias
The k-nearest neighbor (k-NN) is a popular non-parametric benchmark classification algorithm to which new classifiers are usually compared. It is used in numerous applications, some of which may involve thousands of data vectors in a possibly very high dimensional feature space. For real-time classification, a hardware implementation of the algorithm can deliver high performance gains by exploiting parallel processing and block pipelining. We present two different linear array architectures that have been described as soft parameterized IP cores in VHDL. The IP cores are used to synthesize and evaluate a variety of array architectures for different k-NN problem instances and Xilinx FPGAs. It is shown that, using a medium size FPGA device, we can efficiently solve very large classification problems, with thousands of reference data vectors or vector dimensions, while achieving very high throughput. To the best of our knowledge, this is the first effort to design flexible IP cores for the FPGA implementation of the widely used k-NN classifier.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470781
"Clairvoyant site allocation of jobs with highly variable service demands in a computational grid"
S. Zikos, H. Karatza
In this paper we evaluate the performance of three different site allocation policies in a 2-level computational grid with heterogeneous sites. We consider schedulers that are aware of the service demands of jobs, which show high variability. A simulation model is used to evaluate performance in terms of average response time and slowdown, under medium and high load. Simulation results show that the proposed policy outperforms the other two examined, especially at high load.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470724
"Scalability analysis of embarrassingly parallel applications on large clusters"
Fabrício A. B. Silva, H. Senger
This work presents a scalability analysis of embarrassingly parallel applications running on cluster and multi-cluster machines. Several classes of applications fall into this category; examples are Bag-of-Tasks (BoT) applications and some classes of online web services, such as index processing in online web search. The analysis presented here is divided into two parts: first, the impact of the front-end topology on scalability is assessed through a lower-bound analysis; second, several task mapping strategies are compared from the scalability standpoint.
Pub Date: 2010-04-19 · DOI: 10.1109/IPDPSW.2010.5470833
"Parallel external sorting for CUDA-enabled GPUs with load balancing and low transfer overhead"
H. Peters, Ole Schulz-Hildebrandt, N. Luttenberger
Sorting is a well-investigated topic in Computer Science, and by now many efficient sorting algorithms for CPUs and GPUs have been developed. However, GPUs offer no swapping, paging, or similar mechanisms to provide more virtual memory than is physically available, so sorting sequences that exceed GPU memory on the GPU raises the problem of external sorting.