
ACM Transactions on Reconfigurable Technology and Systems: Latest Publications

Improving Fault Tolerance for FPGA SoCs Through Post Radiation Design Analysis
Pub Date: 2024-07-19, DOI: 10.1145/3674841
A. E. Wilson, Nathan Baker, Ethan Campbell, Michael Wirthlin
FPGAs have been shown to operate reliably within harsh radiation environments by employing single-event upset (SEU) mitigation techniques such as configuration scrubbing, triple-modular redundancy, error correction coding, and radiation-aware implementation techniques. The effectiveness of these techniques, however, is limited in complex system-level designs that employ complex I/O interfaces with single-point failures. In previous work, a complex SoC system running Linux applied several of these techniques yet obtained only a 14× improvement in Mean Time to Failure (MTTF). A detailed post-radiation fault analysis found that the remaining reliability limitations were due to the DDR interface, the global clock network, and interconnect. This paper applies a number of design-specific SEU mitigation techniques to address these limitations. The changes include triplicating the global clock, optimizing the placement of the reduction output voters and input flip-flops, and employing a mapping technique called "striping". Applying these techniques improved the MTTF of the mitigated design by a further factor of 1.54×, yielding a 22.8× MTTF improvement over the unmitigated design. A post-radiation fault analysis using BFAT was also performed to find the remaining design vulnerabilities.
Citations: 0
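Triple-modular redundancy, one of the SEU mitigation techniques the abstract lists, replicates logic three times and votes on the outputs so that a single upset is masked. A minimal conceptual sketch in Python (a bitwise 2-of-3 voter, not the paper's FPGA implementation):

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit follows at least two inputs."""
    return (a & b) | (a & c) | (b & c)

# Three replicas of the same logic normally produce the same word; an SEU
# flips one bit in a single replica, and the voter masks the fault.
golden = 0b1011_0010
upset = golden ^ 0b0000_1000        # single-bit flip in one replica
assert majority_vote(golden, upset, golden) == golden

# Two simultaneous upsets of the same bit in different replicas defeat TMR,
# which is why scrubbing is used to repair faults before they accumulate.
assert majority_vote(golden ^ 1, golden ^ 1, golden) != golden
```

Triplicating the global clock, as the paper does, extends this idea to the clock network itself, which a single voter on the data path cannot protect.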
SPARTA: High-Level Synthesis of Parallel Multi-Threaded Accelerators
Pub Date: 2024-07-12, DOI: 10.1145/3677035
Giovanni Gozzi, M. Fiorito, S. Curzel, Claudio Barone, Vito Giovanni Castellana, Marco Minutoli, Antonino Tumeo, Fabrizio Ferrandi
This paper presents a methodology for the Synthesis of PARallel multi-Threaded Accelerators (SPARTA) from OpenMP annotated C/C++ specifications. SPARTA extends an open-source HLS tool, enabling the generation of accelerators that provide latency tolerance for irregular memory accesses through multithreading, support fine-grained memory-level parallelism through a hot-potato deflection-based network-on-chip (NoC), support synchronization constructs, and can instantiate memory-side caches. Our approach is based on a custom runtime OpenMP library, providing flexibility and extensibility. Experimental results show high scalability when synthesizing irregular graph kernels. The accelerators generated with our approach are, on average, 2.29× faster than state-of-the-art HLS methodologies.
Citations: 0
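The latency tolerance that SPARTA's multithreaded accelerators provide for irregular memory accesses has a familiar software analogue: while one thread stalls on a long-latency access, other threads make progress. A toy illustration in Python (this models the general principle, not the SPARTA runtime):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

DATA = {i: i * i for i in range(1000)}

def irregular_lookup(key: int) -> int:
    # Simulate an irregular, long-latency memory access.
    time.sleep(random.uniform(0.001, 0.003))
    return DATA[key]

keys = random.sample(range(1000), 64)

# Serial execution pays every access latency back to back ...
serial = [irregular_lookup(k) for k in keys]

# ... while multiple threads overlap their latencies, improving throughput
# without changing the result.
with ThreadPoolExecutor(max_workers=16) as pool:
    threaded = list(pool.map(irregular_lookup, keys))

assert threaded == serial == [k * k for k in keys]
```

In hardware, the same effect is achieved by keeping many thread contexts in flight so the datapath stays busy while outstanding memory requests are serviced.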
SQL2FPGA: Automated Acceleration of SQL Query Processing on Modern CPU-FPGA Platforms
Pub Date: 2024-07-02, DOI: 10.1145/3674843
Alec Lu, Jahanvi Narendra Agrawal, Zhenman Fang
Today’s big data query engines are under constant pressure to keep up with rapidly increasing demand for faster processing of more complex workloads. In the past few years, FPGA-based database acceleration efforts have demonstrated promising performance improvements with good energy efficiency. However, few studies target the programming and design automation support needed to leverage FPGA accelerators in query processing. Most rely on the SQL query plan generated by CPU query engines and manually map the query plan onto the FPGA accelerators, which is tedious and error-prone. Moreover, such CPU-oriented query plans do not consider the utilization of FPGA accelerators and can miss optimization opportunities. In this paper, we present SQL2FPGA, an FPGA accelerator-aware compiler that automatically maps SQL queries onto heterogeneous CPU-FPGA platforms. The SQL2FPGA front-end takes an optimized logical plan of a SQL query from a database query engine and transforms it into a unified operator-level intermediate representation. To generate an optimized FPGA-aware physical plan, SQL2FPGA implements a set of compiler optimization passes to 1) improve operator acceleration coverage by the FPGA, 2) eliminate redundant computation during physical execution, and 3) minimize data transfer overhead between operators on the CPU and FPGA. Furthermore, it leverages machine learning techniques to predict and identify the optimal platform, CPU or FPGA, for the physical execution of individual query operations. Finally, SQL2FPGA generates the associated query acceleration code for heterogeneous CPU-FPGA system deployment.
Compared to the widely used Apache Spark SQL framework running on the CPU, SQL2FPGA, using two AMD/Xilinx HBM-based Alveo U280 FPGA boards and the 2020 AMD/Xilinx FPGA overlay designs, achieves average performance speedups of 10.1× and 13.9× across all 22 TPC-H benchmark queries at scale factors of 1GB (SF1) and 30GB (SF30), respectively. When evaluated on AMD/Xilinx Alveo U50 FPGA boards, SQL2FPGA with the 2022 AMD/Xilinx FPGA overlay designs also achieves an average speedup of 9.6× at the SF1 scale factor.
Citations: 0
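The core placement decision described above, deciding per operator of the logical plan whether it runs on the CPU or on the FPGA overlay, can be pictured as tagging nodes of an operator-level IR. A deliberately simplified sketch (the operator names and the supported set are illustrative; SQL2FPGA's actual IR, passes, and ML-based predictor are far richer):

```python
# Operators a hypothetical FPGA overlay can accelerate.
FPGA_SUPPORTED = {"scan", "filter", "hash_join", "aggregate"}

def place_operators(logical_plan):
    """Tag each operator of a linearized logical plan with a target device."""
    return [(op, "FPGA" if op in FPGA_SUPPORTED else "CPU")
            for op in logical_plan]

plan = ["scan", "filter", "hash_join", "sort", "aggregate"]
placed = place_operators(plan)
assert placed == [("scan", "FPGA"), ("filter", "FPGA"),
                  ("hash_join", "FPGA"), ("sort", "CPU"),
                  ("aggregate", "FPGA")]
```

A naive rule like this maximizes FPGA coverage but ignores data transfer cost at every CPU/FPGA boundary crossing, which is exactly why SQL2FPGA adds passes to minimize transfers and a learned model to choose the platform per operation.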
Hardware Acceleration for High-Volume Operations of CRYSTALS-Kyber and CRYSTALS-Dilithium
Pub Date: 2024-07-02, DOI: 10.1145/3675172
Xavier Carril, Charalampos Kardaris, Jordi Ribes-González, O. Farràs, Carles Hernández, Vatistas Kostalabros, Joel Ulises González-Jiménez, Miquel Moretó
Many high-demand digital services need to perform several cryptographic operations, such as key exchange or security credentialing, in a short amount of time. At the same time, the security of some of these cryptographic schemes is threatened by advances in quantum computing, as quantum computers could break them in the near future. Post-Quantum Cryptography (PQC) is an emerging field that studies cryptographic algorithms that resist such attacks. The National Institute of Standards and Technology (NIST) has selected the CRYSTALS-Kyber Key Encapsulation Mechanism and the CRYSTALS-Dilithium Digital Signature algorithm as primary PQC standards. In this paper, we present FPGA-based hardware accelerators for high-volume operations of both schemes. We apply High-Level Synthesis (HLS) for hardware optimization, leveraging a batch processing approach to maximize memory throughput and applying custom HLS logic to specific algorithmic components. Using reconfigurable field-programmable gate arrays (FPGAs), we show that our hardware accelerators achieve speedups between 3× and 9× over software baseline implementations, even over ones leveraging CPU vector architectures. Furthermore, the methods used in this study can also be extended to the new CRYSTALS-based NIST FIPS drafts, ML-KEM and ML-DSA, with similar acceleration results.
Citations: 0
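The batch processing approach mentioned in the abstract pays one fixed per-invocation cost (for example, host-to-FPGA transfer setup) for a whole batch of cryptographic operations instead of per operation. A back-of-the-envelope cost model makes the amortization visible (all numbers here are illustrative, not measurements from the paper):

```python
def per_op_cost(batch_size: int,
                fixed_overhead_us: float = 100.0,
                per_op_us: float = 2.0) -> float:
    """Average cost per operation when a batch shares one fixed overhead."""
    return fixed_overhead_us / batch_size + per_op_us

# One operation per invocation pays the full overhead; large batches drive
# the average cost toward the pure per-operation time.
assert per_op_cost(1) == 102.0
assert per_op_cost(100) == 3.0
assert per_op_cost(1) > per_op_cost(10) > per_op_cost(100)
```

High-volume services (many key encapsulations or signatures per second) are precisely the regime where this amortization, combined with deep pipelining on the FPGA, pays off.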
A Scalable Accelerator for Local Score Computation of Structure Learning in Bayesian Networks
Pub Date: 2024-07-02, DOI: 10.1145/3674842
Ryota Miyagi, Ryota Yasudo, Kentaro Sano, Hideki Takase
A Bayesian network is a powerful tool for representing uncertainty in data, offering transparent and interpretable inference, unlike neural networks’ black-box mechanisms. To fully harness the potential of Bayesian networks, it is essential to learn the graph structure that appropriately represents variable interrelations within data. Score-based structure learning, which involves constructing collections of potentially optimal parent sets for each variable, is computationally intensive, especially when dealing with high-dimensional data in discrete random variables. Our proposed novel acceleration algorithm extracts high levels of parallelism, offering significant advantages even with reduced reusability of computational results. In addition, it employs an elastic data representation tailored for parallel computation, making it FPGA-friendly and optimizing module occupancy while ensuring uniform handling of diverse problem scenarios. Demonstrated on a Xilinx Alveo U50 FPGA, our implementation significantly outperforms optimal CPU algorithms and is several times faster than GPU implementations on an NVIDIA TITAN RTX. Furthermore, the results of performance modeling for the accelerator indicate that, for sufficiently large problem instances, it is weakly scalable, meaning that it effectively utilizes increased computational resources for parallelization. To our knowledge, this is the first study to propose a comprehensive methodology for accelerating score-based structure learning, blending algorithmic and architectural considerations.
Citations: 0
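The "local score" at the heart of score-based structure learning assigns a value to each (variable, parent set) pair from counts over discrete data; constructing the best parent sets requires evaluating this score for exponentially many candidate sets, which is what the accelerator targets. A minimal BIC-style local score in Python (a generic textbook formulation, not the authors' accelerated scoring function):

```python
import math
from collections import Counter

def bic_local_score(data, child, parents, arity):
    """BIC local score of `child` given `parents` over discrete rows (dicts)."""
    joint = Counter()   # counts of (parent configuration, child value)
    marg = Counter()    # counts of parent configuration alone
    for row in data:
        cfg = tuple(row[p] for p in parents)
        joint[(cfg, row[child])] += 1
        marg[cfg] += 1
    # Maximized log-likelihood of the child given its parents.
    loglik = sum(n * math.log(n / marg[cfg]) for (cfg, _), n in joint.items())
    # BIC penalty: free parameters grow with the product of parent arities.
    n_params = (arity[child] - 1) * math.prod(arity[p] for p in parents)
    return loglik - 0.5 * math.log(len(data)) * n_params

rows = [{"A": a, "B": a} for a in (0, 1)] * 8   # B copies A exactly
arity = {"A": 2, "B": 2}
# B is perfectly explained by parent set {A}, so that family scores higher
# than the empty parent set despite its larger penalty.
assert bic_local_score(rows, "B", ("A",), arity) > \
       bic_local_score(rows, "B", (), arity)
```

The counting step dominates for high-dimensional data, which is why a parallel, FPGA-friendly data representation for these counts is the focus of the paper.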
A Computation of the Ninth Dedekind Number using FPGA Supercomputing
Pub Date: 2024-07-02, DOI: 10.1145/3674147
Lennart Van Hirtum, P. D. Causmaecker, Jens Goemaere, Tobias Kenter, Heinrich Riebler, Michael Lass, Christian Plessl
This manuscript makes the claim of having computed the 9th Dedekind number, D(9). This was done by accelerating the core operation of the process with an efficient FPGA design that outperforms an optimized 64-core CPU reference by 95×. The FPGA execution was parallelized on the Noctua 2 supercomputer at Paderborn University. The resulting value for D(9) is 286386577668298411128469151667598498812366. This value can be verified in two steps: we have made the data file containing the 490M results available, each of which can be verified separately on CPU, and the whole file sums to our proposed value. The paper explains the mathematical approach in the first part before taking a deep dive into the FPGA accelerator implementation, followed by a performance analysis. The FPGA implementation was done in RTL using a dual-clock architecture and achieves an FMax of 450MHz on the targeted Stratix 10 GX 2800 FPGAs. The total compute time used was 47,000 FPGA hours.
Citations: 0
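The Dedekind number D(n) counts the monotone Boolean functions of n variables, and the sequence grows doubly exponentially, which is what makes D(9) such a heavy computation. For tiny n the definition can be checked directly by brute force; this sketch enumerates all truth tables (feasible only up to about n = 4) and is unrelated to the paper's actual algorithm:

```python
from itertools import product

def dedekind(n: int) -> int:
    """Count monotone Boolean functions of n variables by brute force."""
    points = range(1 << n)                  # inputs encoded as bitmasks
    count = 0
    for table in product((0, 1), repeat=1 << n):
        # f is monotone iff x <= y (as bitmask subsets) implies f(x) <= f(y).
        if all(table[x] <= table[y]
               for x in points for y in points
               if x & y == x):
            count += 1
    return count

# Known small Dedekind numbers: D(0)..D(3).
assert [dedekind(n) for n in range(4)] == [2, 3, 6, 20]
```

Already at n = 9 there are 2^512 truth tables, so the paper instead decomposes the count into 490M independent partial results, each cheap enough to re-verify on a CPU.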