In the FPGA design flow, placement remains one of the most time-consuming stages and is also crucial to the quality of the final result. HPWL and Star+ are widely used cost metrics in FPGA placement for estimating the total wire-length of a candidate placement prior to routing. However, both wire-length models are expensive to compute, requiring O(nm) time, where n is the number of nets and m is the average net cardinality. This paper proposes using the massively multi-threaded architecture provided by GPUs to reduce the time required to compute HPWL and Star+. First, a specialized set of data structures is developed for storing net-connectivity information on the GPU. Next, a study is performed to determine how best to map these data structures onto the GPU to exploit the heterogeneous memories and thread-level parallelism that are available. Finally, a study is performed to determine what effect circuit size and net cardinality have on the achievable speedups. Overall, the results show that speedups of up to 160x over a serial CPU implementation can be achieved for both models when tested on standard benchmarks.
{"title":"GPU-Accelerated Wire-Length Estimation for FPGA Placement","authors":"C. Fobel, G. Grewal, D. Stacey","doi":"10.1109/SAAHPC.2011.16","DOIUrl":"https://doi.org/10.1109/SAAHPC.2011.16","url":null,"abstract":"In the FPGA design flow, placement remains one of the most time-consuming stages, and is also crucial in terms of quality of result. HPWL and Star+ are widely used as cost metrics in FPGA placement for estimating the total wire-length of a candidate placement prior to routing. However, both wire-length models are expensive to compute requiring O(nm) time, where n is the number of nets and m is the average net cardinality. This paper proposes using the massively multi-threaded architecture provided by GPUs to reduce the time required to compute HPWL and Star+. First, a specialized set of data structures is developed for storing net-connectivity information on the GPU. Next, a study is performed to determine how to best map the data structures onto the GPU to exploit the heterogeneous memories and thread-level parallelism that are available. Finally, a study is performed to determine what effect circuit size and net cardinality have on the speedups that can be achieved. Overall, the results show that speedups of as much as 160x over a serial CPU implementation can be achieved for both models when tested using standard benchmarks.","PeriodicalId":331604,"journal":{"name":"2011 Symposium on Application Accelerators in High-Performance Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125972903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an approach to high-performance scientific computing that separates the description of algorithms from the generation of code for parallel hardware architectures such as multi-core CPUs, GPUs, and FPGAs. In this way, scientists can focus on their domain of expertise by describing their algorithms generically, without needing knowledge of specific hardware architectures, programming languages, APIs, or tool flows. We present a prototype implementation that transforms generic descriptions of algorithms with intensive array-type data access into highly optimized code for GPU and multi-GPU cluster systems. We evaluate the approach on an example from the domain of computational nanophotonics and show that our current tool flow generates efficient code, achieving speedups of up to 15.3x on a single GPU and 35.9x on a multi-GPU setup compared to a reference CPU implementation.
{"title":"Transformation of Scientific Algorithms to Parallel Computing Code: Single GPU and MPI Multi GPU Backends with Subdomain Support","authors":"B. Meyer, Christian Plessl, Jens Forstner","doi":"10.1109/SAAHPC.2011.12","DOIUrl":"https://doi.org/10.1109/SAAHPC.2011.12","url":null,"abstract":"We propose an approach for high-performance scientific computing that separates the description of algorithms from the generation of code for parallel hardware architectures like Multi-Core CPUs, GPUs or FPGAs. This way, a scientist can focus on his domain of expertise by describing his algorithms generically without the need to have knowledge of specific hardware architectures, programming languages, APIs or tool flows. We present our prototype implementation that allows for transforming generic descriptions of algorithms with intensive array-type data access to highly optimized code for GPU and multi GPU cluster systems. We evaluate the approach for an example from the domain of computational nanophotonics and show that our current tool flow is able to generate efficient code that achieves speedups of up to 15.3x for a single GPU and even 35.9x for a multi GPU setup compared to a reference CPU implementation.","PeriodicalId":331604,"journal":{"name":"2011 Symposium on Application Accelerators in High-Performance Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134395249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lattice QCD calculations were among the first applications to demonstrate the potential of GPUs in high-performance computing. Our interest is in finding ways to use GPUs effectively for lattice calculations involving the overlap operator. The large memory footprint of these codes requires the use of multiple GPUs in parallel. In this paper we describe the methods we used to implement this operator efficiently. We run our codes on both a GPU cluster and a CPU cluster with similar interconnects, and find that, to match performance, the CPU cluster requires 20-30 times more CPU cores than the GPU cluster has GPUs.
{"title":"Efficient Implementation of the Overlap Operator on Multi-GPUs","authors":"A. Alexandru, M. Lujan, C. Pelissier, B. Gamari, F. Lee","doi":"10.1109/SAAHPC.2011.13","DOIUrl":"https://doi.org/10.1109/SAAHPC.2011.13","url":null,"abstract":"Lattice QCD calculations were one of the first applications to show the potential of GPUs in the area of high performance computing. Our interest is to find ways to effectively use GPUs for lattice calculations using the overlap operator. The large memory footprint of these codes requires the use of multiple GPUs in parallel. In this paper we show the methods we used to implement this operator efficiently. We run our codes both on a GPU cluster and a CPU cluster with similar interconnects. We find that to match performance the CPU cluster requires 20-30 times more CPU cores than GPUs.","PeriodicalId":331604,"journal":{"name":"2011 Symposium on Application Accelerators in High-Performance Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127410108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}