
2011 Symposium on Application Accelerators in High-Performance Computing: Latest Publications

GPU-Accelerated Wire-Length Estimation for FPGA Placement
Pub Date: 2011-07-19 DOI: 10.1109/SAAHPC.2011.16
C. Fobel, G. Grewal, D. Stacey
In the FPGA design flow, placement remains one of the most time-consuming stages, and is also crucial in terms of quality of result. HPWL and Star+ are widely used as cost metrics in FPGA placement for estimating the total wire-length of a candidate placement prior to routing. However, both wire-length models are expensive to compute, requiring O(nm) time, where n is the number of nets and m is the average net cardinality. This paper proposes using the massively multi-threaded architecture provided by GPUs to reduce the time required to compute HPWL and Star+. First, a specialized set of data structures is developed for storing net-connectivity information on the GPU. Next, a study is performed to determine how to best map the data structures onto the GPU to exploit the heterogeneous memories and thread-level parallelism that are available. Finally, a study is performed to determine what effect circuit size and net cardinality have on the speedups that can be achieved. Overall, the results show that speedups of as much as 160x over a serial CPU implementation can be achieved for both models when tested using standard benchmarks.
Citations: 4
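The abstract above does not detail the authors' GPU data structures, so the following is only a minimal CUDA sketch of the HPWL metric it refers to, under stated assumptions: a CSR-style net-connectivity layout (net_offsets, net_pins), one thread per net, and an atomicAdd reduction are illustrative choices made here, not the paper's implementation. HPWL of a net is the half-perimeter of the bounding box of its pins; summing it over n nets of average cardinality m is where the O(nm) cost comes from.

// Hedged illustration only: the data layout and reduction below are assumptions,
// not the SAAHPC'11 paper's implementation.
#include <cstdio>
#include <cuda_runtime.h>

// One thread per net: HPWL(net) = (max_x - min_x) + (max_y - min_y) over the
// net's pins, accumulated into a single total. Each thread walks the ~m pins
// of its net, and n nets are processed, mirroring the O(nm) work of the
// serial version.
__global__ void hpwl_kernel(const int *net_offsets, const int *net_pins,
                            const float *x, const float *y,
                            int num_nets, float *total)
{
    int net = blockIdx.x * blockDim.x + threadIdx.x;
    if (net >= num_nets) return;

    int begin = net_offsets[net];
    int end   = net_offsets[net + 1];
    float min_x = x[net_pins[begin]], max_x = min_x;
    float min_y = y[net_pins[begin]], max_y = min_y;
    for (int p = begin + 1; p < end; ++p) {
        int b = net_pins[p];
        min_x = fminf(min_x, x[b]);  max_x = fmaxf(max_x, x[b]);
        min_y = fminf(min_y, y[b]);  max_y = fmaxf(max_y, y[b]);
    }
    atomicAdd(total, (max_x - min_x) + (max_y - min_y));
}

int main()
{
    // Toy placement, hypothetical data: 4 blocks, 2 nets.
    // Net 0 connects blocks {0,1,2}, net 1 connects blocks {2,3}.
    const int   num_nets    = 2;
    const int   h_offsets[] = {0, 3, 5};
    const int   h_pins[]    = {0, 1, 2, 2, 3};
    const float h_x[]       = {0.f, 4.f, 1.f, 7.f};
    const float h_y[]       = {0.f, 2.f, 5.f, 3.f};

    int *d_offsets, *d_pins;
    float *d_x, *d_y, *d_total;
    cudaMalloc(&d_offsets, sizeof(h_offsets));
    cudaMalloc(&d_pins,    sizeof(h_pins));
    cudaMalloc(&d_x,       sizeof(h_x));
    cudaMalloc(&d_y,       sizeof(h_y));
    cudaMalloc(&d_total,   sizeof(float));
    cudaMemcpy(d_offsets, h_offsets, sizeof(h_offsets), cudaMemcpyHostToDevice);
    cudaMemcpy(d_pins,    h_pins,    sizeof(h_pins),    cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,       h_x,       sizeof(h_x),       cudaMemcpyHostToDevice);
    cudaMemcpy(d_y,       h_y,       sizeof(h_y),       cudaMemcpyHostToDevice);
    cudaMemset(d_total, 0, sizeof(float));

    hpwl_kernel<<<1, 32>>>(d_offsets, d_pins, d_x, d_y, num_nets, d_total);

    float total = 0.f;
    cudaMemcpy(&total, d_total, sizeof(float), cudaMemcpyDeviceToHost);
    printf("total HPWL = %g\n", total);   // net 0: 4+5, net 1: 6+2 -> 17

    cudaFree(d_offsets); cudaFree(d_pins);
    cudaFree(d_x); cudaFree(d_y); cudaFree(d_total);
    return 0;
}

A tuned implementation would replace the atomicAdd accumulation with a hierarchical reduction and would study how to place these arrays in the GPU's different memory spaces, which is the kind of mapping question the paper investigates.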
Transformation of Scientific Algorithms to Parallel Computing Code: Single GPU and MPI Multi GPU Backends with Subdomain Support
Pub Date: 2011-07-01 DOI: 10.1109/SAAHPC.2011.12
B. Meyer, Christian Plessl, Jens Forstner
We propose an approach for high-performance scientific computing that separates the description of algorithms from the generation of code for parallel hardware architectures like Multi-Core CPUs, GPUs or FPGAs. This way, a scientist can focus on his domain of expertise by describing his algorithms generically without the need to have knowledge of specific hardware architectures, programming languages, APIs or tool flows. We present our prototype implementation that allows for transforming generic descriptions of algorithms with intensive array-type data access to highly optimized code for GPU and multi GPU cluster systems. We evaluate the approach for an example from the domain of computational nanophotonics and show that our current tool flow is able to generate efficient code that achieves speedups of up to 15.3x for a single GPU and even 35.9x for a multi GPU setup compared to a reference CPU implementation.
Citations: 3
Efficient Implementation of the Overlap Operator on Multi-GPUs
Pub Date: 2011-06-24 DOI: 10.1109/SAAHPC.2011.13
A. Alexandru, M. Lujan, C. Pelissier, B. Gamari, F. Lee
Lattice QCD calculations were one of the first applications to show the potential of GPUs in the area of high performance computing. Our interest is to find ways to effectively use GPUs for lattice calculations using the overlap operator. The large memory footprint of these codes requires the use of multiple GPUs in parallel. In this paper we show the methods we used to implement this operator efficiently. We run our codes both on a GPU cluster and a CPU cluster with similar interconnects. We find that to match performance the CPU cluster requires 20-30 times more CPU cores than GPUs.
Citations: 30