
Latest articles in Parallel Computing

Integrating FPGA-based hardware acceleration with relational databases
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-02-01 (Epub: 2024-02-06) · DOI: 10.1016/j.parco.2024.103064
Ke Liu , Haonan Tong , Zhongxiang Sun, Zhixin Ren, Guangkui Huang, Hongyin Zhu, Luyang Liu, Qunyang Lin, Chuang Zhang

The explosion of data over the last decades puts significant strain on the computational capacity of the central processing unit (CPU), challenging online analytical processing (OLAP). While previous studies have shown the potential of using Field Programmable Gate Arrays (FPGAs) in database systems, integrating FPGA-based hardware acceleration with relational databases remains challenging because of the complex nature of relational database operations and the need for specialized FPGA programming skills. Additionally, there are significant challenges in optimizing FPGA-based acceleration for specific database workloads, ensuring data consistency and reliability, and integrating FPGA-based hardware acceleration with existing database infrastructure. In this study, we propose a novel end-to-end FPGA-based acceleration system that supports native SQL statements and the native storage engine. We define a callback process to reload the database query logic and customize the scanning method for database queries. Through middleware development, we optimize offloading efficiency on the PCIe bus by scheduling data transmission and computation in a pipelined workflow. Additionally, we design a novel five-stage FPGA microarchitecture module that achieves the optimal clock frequency, further enhancing offloading efficiency. Results from systematic evaluations indicate that our solution allows a single FPGA card to perform as well as 8 CPU query processes while reducing CPU load by 34%. Compared to using 4 CPU cores, our FPGA-based acceleration system reduces query latency by a factor of 1.7 without increasing CPU load. Furthermore, our proposed solution achieves a 2.1x computation speedup for data filtering compared with the software baseline in a single-core environment. Overall, our work presents a valuable end-to-end hardware acceleration system for OLAP databases.
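The gain from scheduling transfers and computation in a pipelined workflow can be illustrated with a minimal timing sketch. This is an illustrative model with made-up costs, not the paper's middleware: with double buffering, the PCIe transfer of chunk i+1 overlaps the FPGA computation of chunk i.

```python
# Illustrative sketch (not the paper's middleware): compare a naive
# transfer-then-compute loop against a double-buffered pipeline.

def sequential_time(n_chunks, t_xfer, t_comp):
    # No overlap: each chunk is transferred over PCIe, then computed.
    return n_chunks * (t_xfer + t_comp)

def pipelined_time(n_chunks, t_xfer, t_comp):
    # Transfer of the next chunk overlaps computation of the current one;
    # the steady state is limited by the slower of the two stages.
    if n_chunks == 0:
        return 0.0
    return t_xfer + (n_chunks - 1) * max(t_xfer, t_comp) + t_comp

# 8 chunks at 1.0 time units per transfer and 1.5 per computation:
# 20.0 time units sequentially vs. 13.0 pipelined.
```

The model makes the familiar point that pipelining hides the cheaper stage entirely once the pipeline is full.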

Citations: 0
OF-WFBP: A near-optimal communication mechanism for tensor fusion in distributed deep learning
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-11-01 (Epub: 2023-11-09) · DOI: 10.1016/j.parco.2023.103053
Yunqi Gao , Zechao Zhang , Bing Hu , A-Long Jin , Chunming Wu

The communication bottleneck has severely restricted the scalability of distributed deep learning. Tensor fusion improves the scalability of data parallelism by overlapping computation and communication tasks. However, existing tensor fusion schemes yield only suboptimal training performance. In this paper, we propose an efficient communication mechanism (OF-WFBP) to find the optimal tensor fusion scheme for synchronous data parallelism. We formulate the mathematical model of OF-WFBP and prove that it is an NP-hard problem. We solve the model analytically in two cases, and propose an improved sparrow search algorithm (GradSSA) to find a near-optimal tensor fusion scheme efficiently in the remaining cases. Experimental results on two different GPU clusters show that OF-WFBP achieves up to a 1.43x speedup compared to state-of-the-art tensor fusion mechanisms.
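For intuition about what a tensor fusion scheme decides, here is a minimal greedy bucketing sketch. This is a simple baseline, not OF-WFBP or GradSSA: consecutive gradient tensors are merged into fused messages whose size stays under a byte threshold, so each bucket is communicated as one message.

```python
def fuse_tensors(tensor_sizes, threshold):
    """Greedily fuse consecutive gradient tensors into buckets of at most
    `threshold` bytes each (a baseline heuristic, not the paper's optimal
    scheme). Returns a list of buckets, each a list of tensor indices."""
    buckets, current, current_size = [], [], 0
    for i, size in enumerate(tensor_sizes):
        if current and current_size + size > threshold:
            buckets.append(current)
            current, current_size = [], 0
        current.append(i)
        current_size += size
    if current:
        buckets.append(current)
    return buckets

# Five gradients of 4, 4, 4, 10 and 2 bytes with an 8-byte threshold
# fuse into [[0, 1], [2], [3], [4]].
```

An optimal scheme additionally weighs the overlap between backward computation and communication, which is exactly the part OF-WFBP models.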

Citations: 0
Targeting performance and user-friendliness: GPU-accelerated finite element computation with automated code generation in FEniCS
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-11-01 (Epub: 2023-10-06) · DOI: 10.1016/j.parco.2023.103051
James D. Trotter , Johannes Langguth , Xing Cai

This paper studies the use of automated code generation to provide user-friendly GPU acceleration for solving partial differential equations (PDEs) with finite element methods. By extending the FEniCS framework and its automated compiler, we enable a high-level description of finite element computations, written in the Unified Form Language, to be auto-translated into parallelised CUDA C++ code. The auto-generated code provides GPU offloading for the finite element assembly of linear equation systems, which are then solved by a GPU-supported linear algebra backend.

Specifically, we explore several auto-generated optimisations of the resulting CUDA C++ code. Numerical experiments show that GPU-based linear system assembly for a typical PDE with first-order elements can benefit from using a lookup table to avoid repeatedly carrying out numerous binary searches, and that further performance gains can be obtained by assembling a sparse matrix row by row. More importantly, the extended FEniCS compiler is able to seamlessly couple the assembly and solution phases for GPU acceleration, so that all unnecessary CPU–GPU data transfers are eliminated. Detailed experiments are used to quantify the negative impact of these data transfers, which can entirely destroy the potential of GPU acceleration if the assembly and solution phases are offloaded to the GPU separately. Finally, a complete, auto-generated GPU-based PDE solver for a nonlinear solid mechanics application is used to demonstrate a substantial speedup over running on dual-socket multi-core CPUs, including GPU acceleration of algebraic multigrid as the preconditioner.
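The lookup-table optimisation can be sketched independently of FEniCS: precompute, once per mesh, the position in the CSR data array of every (row, column) pair each element touches, so the hot assembly loop does no searching at all. A minimal pure-Python sketch under an assumed data layout (the real implementation emits CUDA C++):

```python
def build_lookup(row_ptr, col_idx, element_dofs):
    """Precompute, for each element and each (i, j) dof pair, the index
    into the CSR data array. Built once per mesh; sketch only."""
    lookup = []
    for dofs in element_dofs:
        entry = []
        for i in dofs:
            row_cols = col_idx[row_ptr[i]:row_ptr[i + 1]]
            for j in dofs:
                # list.index stands in for the binary search the table avoids
                entry.append(row_ptr[i] + row_cols.index(j))
        lookup.append(entry)
    return lookup

def assemble(data, lookup, element_matrices):
    # Hot loop: scatter each flattened element matrix with no searches.
    for entry, ke in zip(lookup, element_matrices):
        for pos, val in zip(entry, ke):
            data[pos] += val
```

For two 1D elements sharing dof 1 (CSR pattern row_ptr = [0, 2, 5, 7], col_idx = [0, 1, 0, 1, 2, 1, 2]), assembling all-ones element matrices accumulates a 2 at the shared diagonal entry.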

Citations: 0
Task graph-based performance analysis of parallel-in-time methods
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-11-01 (Epub: 2023-09-14) · DOI: 10.1016/j.parco.2023.103050
Matthias Bolten, Stephanie Friedhoff, Jens Hahne

In this paper, we present a performance model based on task graphs for various iterative parallel-in-time (PinT) methods. PinT methods have been developed to speed up the simulation of time-dependent problems using modern parallel supercomputers. The performance model is based on a data-driven notation of the methods, from which a task graph is generated. Based on this task graph and a distribution of time points across processes typical of PinT methods, a theoretical lower runtime bound for the method can be obtained, as well as a prediction of the runtime for a given number of processes. In particular, the model is able to cover the large parameter space of PinT methods and make predictions for arbitrary parameter settings. Here, we describe a general procedure for generating task graphs based on three iterative PinT methods, namely Parareal, multigrid-reduction-in-time (MGRIT), and the parallel full approximation scheme in space and time (PFASST). Furthermore, we discuss how these task graphs can be used to analyze the performance of the methods. In addition, we compare the predictions of the model with parallel simulation times using five different PinT libraries.
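The theoretical lower runtime bound mentioned above corresponds, with unlimited processes, to the longest weighted path through the task graph. A minimal sketch over a generic DAG (not the paper's PinT-specific graph generator):

```python
def critical_path(tasks):
    """Longest weighted path through a task DAG: a lower bound on parallel
    runtime with unlimited processes. `tasks` maps a task name to a pair
    (cost, list_of_dependency_names). Assumes the graph is acyclic."""
    memo = {}

    def finish(t):
        # Earliest finish time: own cost plus latest-finishing dependency.
        if t not in memo:
            cost, deps = tasks[t]
            memo[t] = cost + max((finish(d) for d in deps), default=0.0)
        return memo[t]

    return max(finish(t) for t in tasks)

# Diamond graph: a feeds b and c, d joins them; the bound follows the
# heavier branch a -> c -> d.
diamond = {
    "a": (1.0, []),
    "b": (2.0, ["a"]),
    "c": (3.0, ["a"]),
    "d": (1.0, ["b", "c"]),
}
```

Comparing this bound with the runtime achievable under a fixed time-point distribution is exactly the kind of analysis the task-graph model enables.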

Citations: 0
Low consumption automatic discovery protocol for DDS-based large-scale distributed parallel computing
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-11-01 (Epub: 2023-11-09) · DOI: 10.1016/j.parco.2023.103052
Zhexu Liu , Shaofeng Liu , Zhiyong Fan , Zhen Zhao

DDS (Data Distribution Service) is an efficient communication specification for distributed parallel computing. However, as the scale of computation expands, high network load and memory consumption consistently limit its performance. This paper proposes a low-consumption automatic discovery protocol to improve DDS in large-scale distributed parallel computing. Firstly, an improved Bloom filter called TBF (Threshold Bloom Filter) is presented to compress topic data. It is then combined with the SDP (Simple Discovery Protocol) to reduce the consumption of the automatic discovery process in DDS. On this basis, data publications and subscriptions between the distributed computing nodes are matched using a binarization threshold θ and a decision threshold T, which can be obtained through iterative optimization algorithms. Experiment results show that SDPTBF guarantees higher transmission accuracy while reducing network load and memory consumption, thereby improving the performance of DDS-based large-scale distributed parallel computing.
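The compression idea rests on a Bloom filter: a node advertises its topic set as a small bit array instead of a topic list, and peers probe it for matches. The sketch below is a plain Bloom filter; the paper's TBF adds the binarization threshold θ on top, whose details are not reproduced here.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter for compact topic advertisement (the TBF of the
    paper extends this with a threshold; that extension is not sketched).
    May yield false positives but never false negatives."""

    def __init__(self, m, k):
        self.m = m          # number of bits
        self.k = k          # number of hash functions
        self.bits = 0       # bit array stored as one big integer

    def _positions(self, topic):
        # Derive k deterministic bit positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{topic}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, topic):
        for p in self._positions(topic):
            self.bits |= 1 << p

    def may_contain(self, topic):
        return all(self.bits >> p & 1 for p in self._positions(topic))
```

A 1024-bit filter with k = 3 encodes a node's topics in 128 bytes regardless of topic-name length, which is where the discovery-traffic savings come from.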

Citations: 0
Optimizing massively parallel sparse matrix computing on ARM many-core processor
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 (Epub: 2023-06-26) · DOI: 10.1016/j.parco.2023.103035
Jiang Zheng , Jiazhi Jiang , Jiangsu Du, Dan Huang, Yutong Lu

Sparse matrix multiplication is ubiquitous in many applications such as graph processing and numerical simulation. In recent years, numerous efficient sparse matrix multiplication algorithms and computational libraries have been proposed. However, most of them are oriented to x86 or GPU platforms, while optimization on ARM many-core platforms has not been well investigated. Our experiments show that existing sparse matrix multiplication libraries for ARM many-core CPUs cannot achieve the expected parallel performance. Compared with a traditional multi-core CPU, an ARM many-core CPU has far more cores and often adopts NUMA techniques to scale memory bandwidth. Its parallel efficiency tends to be restricted by NUMA configuration, memory bandwidth, cache contention, etc.

In this paper, we propose optimized implementations for sparse matrix computing on ARM many-core CPUs. We propose various optimization techniques for several sparse matrix multiplication routines to ensure coalesced access to matrix elements in memory. In detail, the optimization techniques include a fine-tuned CSR-based format for the ARM architecture and a co-optimization of Gustavson's algorithm with a hierarchical cache and a dense array strategy, mitigating the performance loss caused by handling compressed storage formats. We exploit a coarse-grained NUMA-aware strategy for inter-node parallelism and a fine-grained cache-aware strategy for intra-node parallelism to improve the parallel efficiency of sparse matrix multiplication. The evaluation shows that our implementation consistently outperforms the existing library on an ARM many-core processor.
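Gustavson's row-by-row formulation referenced above builds each output row of C = A · B by scaling and merging the rows of B selected by A's nonzeros. A compact pure-Python sketch over CSR arrays, where a dict stands in for the dense scratch array the paper's "dense array strategy" would use:

```python
def spgemm_gustavson(a_ptr, a_col, a_val, b_ptr, b_col, b_val):
    """Sparse matrix product C = A @ B, row by row (Gustavson's algorithm).
    Inputs/outputs are CSR triples (row pointers, column indices, values).
    Sketch only: a dict accumulator replaces the dense scratch array."""
    c_ptr, c_col, c_val = [0], [], []
    n_rows = len(a_ptr) - 1
    for i in range(n_rows):
        acc = {}
        # For each nonzero A[i, k], merge in row k of B scaled by A[i, k].
        for idx in range(a_ptr[i], a_ptr[i + 1]):
            k, av = a_col[idx], a_val[idx]
            for jdx in range(b_ptr[k], b_ptr[k + 1]):
                j = b_col[jdx]
                acc[j] = acc.get(j, 0.0) + av * b_val[jdx]
        for j in sorted(acc):
            c_col.append(j)
            c_val.append(acc[j])
        c_ptr.append(len(c_col))
    return c_ptr, c_col, c_val
```

Because each output row is produced independently, rows parallelize naturally across cores; the NUMA- and cache-aware strategies in the paper decide how those rows are distributed.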

Citations: 0
Distributed software defined network-based fog to fog collaboration scheme
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 (Epub: 2023-07-29) · DOI: 10.1016/j.parco.2023.103040
Muhammad Kabeer , Ibrahim Yusuf , Nasir Ahmad Sufi

Fog computing was created to supplement the cloud in bridging the communication delay gap by deploying fog nodes nearer to Internet of Things (IoT) devices. Depending on the geographical location, computational resources, and rate of IoT requests, fog nodes can be idle or saturated. The latter case requires a special mechanism to enable collaboration with other nodes through service offloading, improving resource utilization. Software Defined Networking (SDN) brings improved bandwidth, latency, and awareness of the network topology, which has recently attracted researchers' attention and delivers promising results in service offloading. In this study, a Hierarchical Distributed Software Defined Network-based (DSDN) fog-to-fog collaboration model is proposed; the scheme considers computational resources, such as the available CPU, and network resources, such as the communication hops to a prospective offloading node. Since fog nodes have limited resources and demand for fog services is projected to be high in the near future, the model also accounts for extreme cases in which all nearby nodes in a fog domain are saturated, employing a supervisor controller to scale the collaboration to other domains. The results of the simulations carried out on Mininet show that the proposed multi-controller DSDN solution outperforms the traditional single-controller SDN solution, and further demonstrate that increasing the number of fog nodes does not significantly affect service offloading performance when multiple controllers are used.
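The offloading decision described above, weighing a candidate node's free CPU against its hop distance, can be sketched as a simple scoring rule. The weights and the score form here are assumptions for illustration, not the paper's model:

```python
def select_offload_target(nodes, cpu_weight=0.7, hop_weight=0.3):
    """Pick an offload target favouring free CPU and few hops.
    `nodes` is a list of (name, free_cpu in [0, 1], hops). Returns the
    best node's name, or None when every candidate is saturated, in
    which case the supervisor controller would scale the search to
    other domains. Weights are illustrative assumptions."""
    candidates = [n for n in nodes if n[1] > 0.0]
    if not candidates:
        return None  # escalate to the supervisor controller
    def score(node):
        _, free_cpu, hops = node
        return cpu_weight * free_cpu - hop_weight * hops
    return max(candidates, key=score)[0]

# A nearby half-loaded node can beat a distant idle one:
# ("f2", 0.5 free CPU, 1 hop) scores above ("f1", 0.9 free CPU, 2 hops).
```

The hop term is what the SDN controller's topology view makes cheap to obtain, which is the motivation for building the scheme on SDN in the first place.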

Citations: 0
Finding inputs that trigger floating-point exceptions in heterogeneous computing via Bayesian optimization
IF 1.4 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 (Epub: 2023-08-02) · DOI: 10.1016/j.parco.2023.103042
Ignacio Laguna , Anh Tran , Ganesh Gopalakrishnan

Testing code for floating-point exceptions is crucial as exceptions can quickly propagate and produce unreliable numerical answers. The state-of-the-art to test for floating-point exceptions in heterogeneous systems is quite limited and solutions require the application’s source code, which precludes their use in accelerated libraries where the source is not publicly available. We present an approach to find inputs that trigger floating-point exceptions in black-box CPU or GPU functions, i.e., functions where the source code and information about input bounds are unavailable. Our approach is the first to use Bayesian optimization (BO) to identify such inputs and uses novel strategies to overcome the challenges that arise in applying BO to this problem. We implement our approach in the Xscope framework and demonstrate it on 58 functions from the CUDA Math Library and 81 functions from the Intel Math Library. Xscope is able to identify inputs that trigger exceptions in about 73% of the tested functions.
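The black-box setting above can be illustrated with a small harness that classifies a function's outputs as overflow, invalid, or underflow. For brevity, this sketch samples inputs log-uniformly at random over the double exponent range in place of Xscope's Bayesian-optimization loop; the classification rules and the `search_exceptions` interface are illustrative assumptions, not the paper's implementation:

```python
import math
import random
import sys

def classify(y):
    """Classify a float result as a triggered FP exception, if any."""
    if math.isnan(y):
        return "invalid (NaN)"
    if math.isinf(y):
        return "overflow (Inf)"
    if y != 0.0 and abs(y) < sys.float_info.min:
        return "underflow (subnormal)"
    return None

def search_exceptions(fn, trials=2000, seed=0):
    """Sample inputs log-uniformly over the double exponent range.

    Plain random search stands in here for the Bayesian-optimization
    surrogate; both treat `fn` as a black box and only observe outputs.
    A real harness would also catch exceptions raised by `fn` itself.
    """
    rng = random.Random(seed)
    found = {}
    for _ in range(trials):
        x = rng.choice([-1.0, 1.0]) * 10.0 ** rng.uniform(-300, 300)
        tag = classify(fn(x))
        if tag is not None and tag not in found:
            found[tag] = x  # remember one triggering input per exception kind
    return found

# Squaring overflows for large |x| and produces subnormals for tiny |x|.
print(search_exceptions(lambda x: x * x))
```

The paper's contribution is precisely in replacing the random sampling above with BO acquisition strategies that find such inputs with far fewer function evaluations.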

{"title":"Finding inputs that trigger floating-point exceptions in heterogeneous computing via Bayesian optimization","authors":"Ignacio Laguna ,&nbsp;Anh Tran ,&nbsp;Ganesh Gopalakrishnan","doi":"10.1016/j.parco.2023.103042","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103042","url":null,"abstract":"<div><p><span><span>Testing code for floating-point exceptions is crucial as exceptions can quickly propagate and produce unreliable numerical answers. The state-of-the-art to test for floating-point exceptions in heterogeneous systems<span> is quite limited and solutions require the application’s source code, which precludes their use in accelerated libraries where the source is not publicly available. We present an approach to find inputs that trigger floating-point exceptions in black-box CPU or </span></span>GPU functions, i.e., functions where the source code and information about input bounds are unavailable. Our approach is the first to use Bayesian optimization (BO) to identify such inputs and uses novel strategies to overcome the challenges that arise in applying BO to this problem. We implement our approach in the </span><span><span>Xscope</span></span> framework and demonstrate it on 58 functions from the CUDA Math Library and 81 functions from the Intel Math Library. 
<span><span>Xscope</span></span> is able to identify inputs that trigger exceptions in about 73% of the tested functions.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103042"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Parallelizable efficient large order multiple recursive generators 并行化高效大阶多重递归生成器
IF 1.4 4区 计算机科学 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-09-01 Epub Date: 2023-06-26 DOI: 10.1016/j.parco.2023.103036
Lih-Yuan Deng , Bryan R. Winter , Jyh-Jen Horng Shiau , Henry Horng-Shing Lu , Nirman Kumar , Ching-Chi Yang

The general multiple recursive generator (MRG) of maximum period has been thought of as an excellent source of pseudo-random numbers. Using a kth-order linear recurrence modulo p, this generator produces the next pseudo-random number as a linear combination of the previous k numbers. General maximum-period MRGs of order k have excellent empirical performance, and their strong mathematical foundations have been studied extensively.

For computing efficiency, it is common to consider special MRGs with a simple structure and few non-zero terms, which require fewer costly multiplications. However, such MRGs will not have as good a "spectral test" property as general MRGs with many non-zero terms. On the other hand, there are two potential problems with using general MRGs with many non-zero terms: (1) their efficient implementation and (2) an efficient scheme for their parallelization. Efficient implementation of general MRGs of larger order k can be difficult because the kth-order linear recurrence requires many costly multiplications to produce the next number. As for parallelization, for a large k, a traditional scheme such as the "jump-ahead parallelization method" for general MRGs becomes highly computationally inefficient. We propose implementing maximum-period MRGs with many nonzero terms efficiently and in parallel by using an MCG constructed from the MRG. In particular, we propose a special class of large-order MRGs with many nonzero terms that also have an efficient and parallel implementation.
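The kth-order recurrence x_n = (a_1·x_{n−1} + ⋯ + a_k·x_{n−k}) mod p can be sketched directly. The coefficients and seed below are arbitrary illustrative choices, not one of the paper's maximum-period parameter sets; p = 2^31 − 1 is a commonly used prime modulus:

```python
def make_mrg(coeffs, seed, p=2**31 - 1):
    """kth-order multiple recursive generator:
        x_n = (a_1*x_{n-1} + ... + a_k*x_{n-k}) mod p
    `coeffs` = (a_1, ..., a_k); `seed` = (x_{k-1}, ..., x_0), most recent first.
    """
    state = list(seed)  # state[0] holds the most recent value
    assert len(state) == len(coeffs) and any(state), "seed must be nonzero"

    def next_value():
        x = sum(a * s for a, s in zip(coeffs, state)) % p
        state.insert(0, x)  # shift the window: drop x_{n-k}, keep x_n
        state.pop()
        return x

    return next_value

# Order-3 example; a_2 = 0 mimics the "few non-zero terms" special structure.
gen = make_mrg(coeffs=(1071064, 0, 2113664), seed=(12345, 6789, 42))
print([gen() for _ in range(3)])
```

Each step costs up to k multiplications, which is exactly the overhead the paper's MCG-based construction targets for large k.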

{"title":"Parallelizable efficient large order multiple recursive generators","authors":"Lih-Yuan Deng ,&nbsp;Bryan R. Winter ,&nbsp;Jyh-Jen Horng Shiau ,&nbsp;Henry Horng-Shing Lu ,&nbsp;Nirman Kumar ,&nbsp;Ching-Chi Yang","doi":"10.1016/j.parco.2023.103036","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103036","url":null,"abstract":"<div><p>The general multiple recursive generator (MRG) of maximum period has been thought of as an excellent source of pseudo random numbers. Based on a <span><math><mi>k</mi></math></span>th order linear recurrence modulo <span><math><mi>p</mi></math></span><span>, this generator produces the next pseudo random number based on a linear combination of the previous </span><span><math><mi>k</mi></math></span> numbers. General maximum period MRGs of order <span><math><mi>k</mi></math></span> have excellent empirical performance, and their strong mathematical foundations have been studied extensively.</p><p><span>For computing efficiency, it is common to consider special MRGs with some simple structure with few non-zero terms which requires fewer costly multiplications. However, such MRGs will not have a good “spectral test” property when compared with general MRGs with many non-zero terms. On the other hand, there are two potential problems of using general MRGs with many non-zero terms: (1) its efficient implementation (2) its efficient scheme for its parallelization. Efficient implementation of general MRGs of larger order </span><span><math><mi>k</mi></math></span> can be difficult because the <span><math><mi>k</mi></math></span>th order linear recurrence requires many costly multiplications to produce the next number. For its parallelization scheme, for a large <span><math><mi>k</mi></math></span>, the traditional scheme like “jump-ahead parallelization method” for general MRGs becomes highly computationally inefficient. 
We proposed implementing maximum period MRGs with many nonzero terms efficiently and in parallel by using a MCG constructed from the MRG. In particular, we propose a special class of large order MRGs with many nonzero terms that also have an efficient and parallel implementation.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103036"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial on Advances in High Performance Programming 关于高性能编程进展的社论
IF 1.4 4区 计算机科学 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2023-09-01 DOI: 10.1016/j.parco.2023.103037
A. Marowka, Przemysław Stpiczyński
{"title":"Editorial on Advances in High Performance Programming","authors":"A. Marowka, Przemysław Stpiczyński","doi":"10.1016/j.parco.2023.103037","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103037","url":null,"abstract":"","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 1","pages":"103037"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55107714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0