
Latest publications in Parallel Computing

Targeting performance and user-friendliness: GPU-accelerated finite element computation with automated code generation in FEniCS
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-10-06 | DOI: 10.1016/j.parco.2023.103051
James D. Trotter, Johannes Langguth, Xing Cai

This paper studies the use of automated code generation to provide user-friendly GPU acceleration for solving partial differential equations (PDEs) with finite element methods. By extending the FEniCS framework and its automated compiler, we enable a high-level description of finite element computations, written in the Unified Form Language, to be automatically translated into parallelised CUDA C++ code. The auto-generated code provides GPU offloading for the finite element assembly of linear systems of equations, which are then solved by a GPU-supported linear algebra backend.

Specifically, we explore several auto-generated optimisations of the resulting CUDA C++ code. Numerical experiments show that GPU-based linear system assembly for a typical PDE with first-order elements can benefit from using a lookup table to avoid repeatedly carrying out numerous binary searches, and that further performance gains can be obtained by assembling a sparse matrix row by row. More importantly, the extended FEniCS compiler is able to seamlessly couple the assembly and solution phases for GPU acceleration, so that all unnecessary CPU–GPU data transfers are eliminated. Detailed experiments are used to quantify the negative impact of these data transfers, which can entirely destroy the potential of GPU acceleration if the assembly and solution phases are offloaded to GPU separately. Finally, a complete, auto-generated GPU-based PDE solver for a nonlinear solid mechanics application is used to demonstrate a substantial speedup over running on dual-socket multi-core CPUs, including GPU acceleration of algebraic multigrid as the preconditioner.
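
To make the high-level input concrete, the following is a minimal sketch of the kind of Unified Form Language description such a compiler consumes: a Poisson problem with first-order elements, written against the legacy DOLFIN Python interface. The GPU-specific code generation described in the paper happens behind this interface and is not shown; the mesh size and forms here are illustrative.

```python
# A Poisson problem expressed in UFL via the legacy DOLFIN Python API.
# The compiler extension described in the paper would auto-generate
# CUDA C++ for assembling the forms `a` and `L` below.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction,
                    TestFunction, Constant, DirichletBC, Function,
                    solve, inner, grad, dx)

mesh = UnitSquareMesh(64, 64)            # triangulated unit square
V = FunctionSpace(mesh, "Lagrange", 1)   # first-order elements, as in the experiments

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = inner(grad(u), grad(v)) * dx         # bilinear form (stiffness matrix)
L = f * v * dx                           # linear form (right-hand side)

bc = DirichletBC(V, Constant(0.0), "on_boundary")

u_h = Function(V)
solve(a == L, u_h, bc)                   # assembly + linear solve
```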

{"title":"Targeting performance and user-friendliness: GPU-accelerated finite element computation with automated code generation in FEniCS","authors":"James D. Trotter ,&nbsp;Johannes Langguth ,&nbsp;Xing Cai","doi":"10.1016/j.parco.2023.103051","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103051","url":null,"abstract":"<div><p>This paper studies the use of automated code generation to provide user-friendly GPU acceleration for solving partial differential equations (PDEs) with finite element methods. By extending the FEniCS framework and its automated compiler, we have achieved that a high-level description of finite element computations written in the Unified Form Language is auto-translated to parallelised CUDA C++ code. The auto-generated code provides GPU offloading for the finite element assembly of linear equation systems which are then solved by a GPU-supported linear algebra backend.</p><p>Specifically, we explore several auto-generated optimisations of the resulting CUDA C++ code. Numerical experiments show that GPU-based linear system assembly for a typical PDE with first-order elements can benefit from using a lookup table to avoid repeatedly carrying out numerous binary searches, and that further performance gains can be obtained by assembling a sparse matrix row by row. More importantly, the extended FEniCS compiler is able to seamlessly couple the assembly and solution phases for GPU acceleration, so that all unnecessary CPU–GPU data transfers are eliminated. Detailed experiments are used to quantify the negative impact of these data transfers, which can entirely destroy the potential of GPU acceleration if the assembly and solution phases are offloaded to GPU separately. Finally, a complete, auto-generated GPU-based PDE solver for a nonlinear solid mechanics application is used to demonstrate a substantial speedup over running on dual-socket multi-core CPUs, including GPU acceleration of algebraic multigrid as the preconditioner.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"118 ","pages":"Article 103051"},"PeriodicalIF":1.4,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49881777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Task graph-based performance analysis of parallel-in-time methods
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-14 | DOI: 10.1016/j.parco.2023.103050
Matthias Bolten, Stephanie Friedhoff, Jens Hahne

In this paper, we present a performance model based on task graphs for various iterative parallel-in-time (PinT) methods. PinT methods have been developed to speed up the simulation time of time-dependent problems using modern parallel supercomputers. The performance model is based on a data-driven notation of the methods, from which a task graph is generated. Based on this task graph and a distribution of time points across processes typical for PinT methods, a theoretical lower runtime bound for the method can be obtained, as well as a prediction of the runtime for a given number of processes. In particular, the model is able to cover the large parameter space of PinT methods and make predictions for arbitrary parameter settings. Here, we describe a general procedure for generating task graphs based on three iterative PinT methods, namely, Parareal, multigrid-reduction-in-time (MGRIT), and the parallel full approximation scheme in space and time (PFASST). Furthermore, we discuss how these task graphs can be used to analyze the performance of the methods. In addition, we compare the predictions of the model with parallel simulation times using five different PinT libraries.
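
As an illustration of the lower-bound idea, the sketch below computes the critical path of a small task graph: with any number of processes, no schedule can finish faster than the most expensive chain of dependent tasks. The toy DAG and costs are illustrative stand-ins, not one of the paper's generated PinT graphs.

```python
# Critical-path lower bound over a task DAG (Python 3.9+).
from graphlib import TopologicalSorter

def critical_path_length(deps, cost):
    """deps: task -> set of prerequisite tasks; cost: task -> runtime."""
    finish = {}
    for task in TopologicalSorter(deps).static_order():
        ready = max((finish[p] for p in deps.get(task, ())), default=0.0)
        finish[task] = ready + cost[task]
    return max(finish.values())

# Two time slices of a Parareal-like sweep: coarse steps chain
# sequentially, fine steps hang off each coarse step in parallel.
deps = {"c1": set(), "f1": {"c1"}, "c2": {"c1"}, "f2": {"c2"}, "u2": {"f1", "c2"}}
cost = {"c1": 1.0, "f1": 4.0, "c2": 1.0, "f2": 4.0, "u2": 0.5}
print(critical_path_length(deps, cost))  # 6.0: the chain c1 -> c2 -> f2
```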

{"title":"Task graph-based performance analysis of parallel-in-time methods","authors":"Matthias Bolten,&nbsp;Stephanie Friedhoff,&nbsp;Jens Hahne","doi":"10.1016/j.parco.2023.103050","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103050","url":null,"abstract":"<div><p>In this paper, we present a performance model based on task graphs for various iterative parallel-in-time (PinT) methods. PinT methods have been developed to speed up the simulation time of time-dependent problems using modern parallel supercomputers<span>. The performance model is based on a data-driven notation of the methods, from which a task graph is generated. Based on this task graph and a distribution of time points across processes typical for PinT methods, a theoretical lower runtime bound for the method can be obtained, as well as a prediction of the runtime for a given number of processes. In particular, the model is able to cover the large parameter space of PinT methods and make predictions for arbitrary parameter settings. Here, we describe a general procedure for generating task graphs based on three iterative PinT methods, namely, Parareal, multigrid-reduction-in-time (MGRIT), and the parallel full approximation scheme in space and time (PFASST). Furthermore, we discuss how these task graphs can be used to analyze the performance of the methods. In addition, we compare the predictions of the model with parallel simulation times using five different PinT libraries.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"118 ","pages":"Article 103050"},"PeriodicalIF":1.4,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49881776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distributed software defined network-based fog to fog collaboration scheme
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-01 | DOI: 10.1016/j.parco.2023.103040
Muhammad Kabeer, Ibrahim Yusuf, Nasir Ahmad Sufi

Fog computing was created to supplement the cloud in bridging the communication delay gap by deploying fog nodes nearer to Internet of Things (IoT) devices. Depending on geographical location, computational resources and the rate of IoT requests, fog nodes can be idle or saturated. The latter require a special mechanism that enables collaboration with other nodes through service offloading to improve resource utilization. Software Defined Networking (SDN) offers improved bandwidth, latency and awareness of network topology, which has recently attracted researchers' attention and delivered promising results in service offloading. In this study, a Hierarchical Distributed Software Defined Network-based (DSDN) fog-to-fog collaboration model is proposed; the scheme considers computational resources, such as the available CPU, and network resources, such as the communication hops to a prospective offloading node. Because fog nodes have limited resources and demand for fog services is projected to grow in the near future, the model also accounts for extreme cases in which all nearby nodes in a fog domain are saturated, employing a supervisor controller to scale the collaboration to other domains. Simulations carried out on Mininet show that the proposed multi-controller DSDN solution outperforms the traditional single-controller SDN solution, and further demonstrate that increasing the number of fog nodes does not significantly affect service offloading performance when multiple controllers are used.
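
The following is a hedged sketch of the kind of offloading-target selection the scheme describes, ranking candidate fog nodes by available CPU and communication hops. The `FogNode` structure, the weights, and the `offload_target` helper are illustrative assumptions, not the paper's controller logic.

```python
# Rank candidate fog nodes by a weighted CPU/hop trade-off.
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    available_cpu: float  # fraction of CPU capacity currently free, 0..1
    hops: int             # communication hops from the saturated node

def offload_target(candidates, cpu_weight=0.7, hop_weight=0.3):
    """Pick the candidate with the best CPU/hop trade-off, if any."""
    usable = [n for n in candidates if n.available_cpu > 0.0]
    if not usable:
        return None  # all nearby nodes saturated: escalate to the supervisor controller
    max_hops = max(n.hops for n in usable) or 1
    def score(n):
        return cpu_weight * n.available_cpu - hop_weight * (n.hops / max_hops)
    return max(usable, key=score)

nodes = [FogNode("fog-a", 0.10, 1), FogNode("fog-b", 0.65, 2), FogNode("fog-c", 0.40, 4)]
print(offload_target(nodes).name)  # fog-b: more free CPU outweighs the extra hop
```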

{"title":"Distributed software defined network-based fog to fog collaboration scheme","authors":"Muhammad Kabeer ,&nbsp;Ibrahim Yusuf ,&nbsp;Nasir Ahmad Sufi","doi":"10.1016/j.parco.2023.103040","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103040","url":null,"abstract":"<div><p><span><span>Fog computing was created to supplement the cloud in bridging the communication delay gap by deploying fog nodes nearer to </span>Internet of Things<span> (IoT) devices. Depending on the geographical location, computational resource and rate of IoT requests, fog nodes can be idle or saturated. The latter requires special mechanism to enable collaboration with other nodes through service offloading to improve resource utilization. Software Defined Network (SDN) comes with improved bandwidth, latency and understanding of </span></span>network topology<span>, which recently attracted researchers attention and delivers promising results in service offloading. In this study, a Hierarchical Distributed Software Defined Network-based (DSDN) fog to fog collaboration model is proposed; the scheme considers computational resources such as available CPU and network resources such as communication hops of a prospective offloading node. Fog nodes having limited resources coupled with the projected high demand for fog services in the near future, the model also accounts for extreme cases in which all nearby nodes in a fog domain are saturated, employing a supervisor controller to scale the collaboration to other domains. The results of the simulations carried out on Mininet shows that the proposed multi-controller DSDN solution outperforms the traditional single controller SDN solution, it also further demonstrate that increase in the number of fog nodes does not affect service offloading performance significantly when multiple controllers are used.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103040"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing massively parallel sparse matrix computing on ARM many-core processor
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-01 | DOI: 10.1016/j.parco.2023.103035
Jiang Zheng, Jiazhi Jiang, Jiangsu Du, Dan Huang, Yutong Lu

Sparse matrix multiplication is ubiquitous in many applications such as graph processing and numerical simulation. In recent years, numerous efficient sparse matrix multiplication algorithms and computational libraries have been proposed. However, most of them target x86 or GPU platforms, while optimization on ARM many-core platforms has not been well investigated. Our experiments show that existing sparse matrix multiplication libraries for ARM many-core CPUs cannot achieve the expected parallel performance. Compared with a traditional multi-core CPU, an ARM many-core CPU has far more cores and often adopts NUMA techniques to scale the memory bandwidth. Its parallel efficiency tends to be restricted by the NUMA configuration, memory bandwidth, cache contention, etc.

In this paper, we propose optimized implementations for sparse matrix computing on ARM many-core CPUs. We propose various optimization techniques for several sparse matrix multiplication routines to ensure coalesced access to matrix elements in memory. In detail, the optimization techniques include a CSR-based format fine-tuned for the ARM architecture, and co-optimization of Gustavson's algorithm with a hierarchical cache and a dense-array strategy to mitigate the performance loss caused by handling compressed storage formats. We exploit a coarse-grained NUMA-aware strategy for inter-node parallelism and a fine-grained cache-aware strategy for intra-node parallelism to improve the parallel efficiency of sparse matrix multiplication. The evaluation shows that our implementation consistently outperforms the existing library on an ARM many-core processor.
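
As a concrete reference point for the row-by-row strategy, the sketch below implements Gustavson's algorithm for CSR sparse matrix-matrix multiplication with a dense accumulator per output row. Plain Python stands in for the tuned ARM implementation, and the CSR layout shown is the textbook one rather than the paper's fine-tuned variant.

```python
# Gustavson's row-wise SpGEMM with a dense accumulator array.
import numpy as np

def spgemm_gustavson(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val, n_cols):
    """C = A @ B for CSR inputs; returns (ptr, idx, val) of C in CSR."""
    acc = np.zeros(n_cols)                # dense accumulator, one slot per column of C
    flags = np.zeros(n_cols, dtype=bool)  # which slots hold live partial sums
    c_ptr, c_idx, c_val = [0], [], []
    n_rows = len(a_ptr) - 1
    for i in range(n_rows):
        touched = []
        for jj in range(a_ptr[i], a_ptr[i + 1]):      # nonzeros A[i, j]
            j, a_ij = a_idx[jj], a_val[jj]
            for kk in range(b_ptr[j], b_ptr[j + 1]):  # row j of B scales into row i of C
                k = b_idx[kk]
                if not flags[k]:
                    flags[k] = True
                    touched.append(k)
                acc[k] += a_ij * b_val[kk]
        for k in sorted(touched):                     # emit row i, then reset the slots
            c_idx.append(k)
            c_val.append(acc[k])
            acc[k] = 0.0
            flags[k] = False
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val

# 2x2 example: A = [[1, 2], [0, 3]] times B = [[4, 0], [0, 5]].
ptr, idx, val = spgemm_gustavson([0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0],
                                 [0, 1, 2], [0, 1], [4.0, 5.0], n_cols=2)
print(ptr, idx, val)  # [0, 2, 3] [0, 1, 1] [4.0, 10.0, 15.0]
```

The dense accumulator is what removes the repeated searches into compressed structures: each partial product lands in a slot addressed directly by column index, and only the touched slots are compacted back into CSR at the end of the row.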

{"title":"Optimizing massively parallel sparse matrix computing on ARM many-core processor","authors":"Jiang Zheng ,&nbsp;Jiazhi Jiang ,&nbsp;Jiangsu Du,&nbsp;Dan Huang,&nbsp;Yutong Lu","doi":"10.1016/j.parco.2023.103035","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103035","url":null,"abstract":"<div><p><span><span>Sparse matrix multiplication is ubiquitous in many applications such as graph processing and numerical simulation. In recent years, numerous efficient sparse matrix multiplication algorithms and computational libraries have been proposed. However, most of them are oriented to x86 or GPU platforms, while the optimization on ARM many-core platforms has not been well investigated. Our experiments show that existing sparse matrix multiplication libraries for ARM many-core CPU cannot achieve expected parallel performance. Compared with traditional multi-core CPU, ARM many-core CPU has far more cores and often adopts </span>NUMA techniques to scale the </span>memory bandwidth. Its parallel efficiency tends to be restricted by NUMA configuration, memory bandwidth cache contention, etc.</p><p>In this paper, we propose optimized implementations for sparse matrix computing on ARM many-core CPU. We propose various optimization techniques for several routines of sparse matrix multiplication to ensure coalesced access<span> of matrix elements in the memory. In detail, the optimization techniques include a fine-tuned CSR-based format for ARM architecture, co-optimization of Gustavson’s algorithm with hierarchical cache and dense array strategy to mitigate performance loss caused by handling compressed storage formats. We exploit the coarse-grained NUMA-aware strategy for inter-node parallelism and the fine-grained cache-aware strategy for intra-node parallelism to improve the parallel efficiency of sparse matrix multiplication. The evaluation shows that our implementation consistently outperforms the existing library on ARM many-core processor.</span></p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103035"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial on Advances in High Performance Programming
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-01 | DOI: 10.1016/j.parco.2023.103037
A. Marowka, Przemysław Stpiczyński
{"title":"Editorial on Advances in High Performance Programming","authors":"A. Marowka, Przemysław Stpiczyński","doi":"10.1016/j.parco.2023.103037","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103037","url":null,"abstract":"","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 1","pages":"103037"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55107714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Finding inputs that trigger floating-point exceptions in heterogeneous computing via Bayesian optimization
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-01 | DOI: 10.1016/j.parco.2023.103042
Ignacio Laguna, Anh Tran, Ganesh Gopalakrishnan

Testing code for floating-point exceptions is crucial, as exceptions can quickly propagate and produce unreliable numerical answers. The state of the art for testing floating-point exceptions in heterogeneous systems is quite limited, and existing solutions require the application's source code, which precludes their use in accelerated libraries where the source is not publicly available. We present an approach to find inputs that trigger floating-point exceptions in black-box CPU or GPU functions, i.e., functions where the source code and information about input bounds are unavailable. Our approach is the first to use Bayesian optimization (BO) to identify such inputs, and it uses novel strategies to overcome the challenges that arise in applying BO to this problem. We implement our approach in the Xscope framework and demonstrate it on 58 functions from the CUDA Math Library and 81 functions from the Intel Math Library. Xscope is able to identify inputs that trigger exceptions in about 73% of the tested functions.
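
The core search loop can be pictured as follows: sample inputs to a black-box function and flag any that yield inf, NaN, or subnormal results. Xscope drives this search with Bayesian optimization; the plain random sampling below is a deliberately simplified stand-in for that loop, and `triggers_exception` is an illustrative helper, not part of Xscope.

```python
# Random-sampling stand-in for BO-driven floating-point exception hunting.
import math
import random

def triggers_exception(value):
    """Classify a result as a floating-point-exception witness."""
    if math.isinf(value):
        return "overflow/inf"
    if math.isnan(value):
        return "invalid/nan"
    if value != 0.0 and abs(value) < 2.2250738585072014e-308:  # below smallest normal double
        return "underflow/subnormal"
    return None

def search(black_box, low, high, trials=10_000, seed=0):
    rng = random.Random(seed)
    hits = []
    for _ in range(trials):
        x = rng.uniform(low, high)
        try:
            kind = triggers_exception(black_box(x))
        except (OverflowError, ValueError):
            kind = "raised"  # Python surfaces some FP errors as exceptions
        if kind:
            hits.append((x, kind))
    return hits

# Example black box: math.exp overflows for large inputs and
# underflows to subnormals for large negative inputs.
print(search(math.exp, -1000.0, 1000.0)[:3])
```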

{"title":"Finding inputs that trigger floating-point exceptions in heterogeneous computing via Bayesian optimization","authors":"Ignacio Laguna ,&nbsp;Anh Tran ,&nbsp;Ganesh Gopalakrishnan","doi":"10.1016/j.parco.2023.103042","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103042","url":null,"abstract":"<div><p><span><span>Testing code for floating-point exceptions is crucial as exceptions can quickly propagate and produce unreliable numerical answers. The state-of-the-art to test for floating-point exceptions in heterogeneous systems<span> is quite limited and solutions require the application’s source code, which precludes their use in accelerated libraries where the source is not publicly available. We present an approach to find inputs that trigger floating-point exceptions in black-box CPU or </span></span>GPU functions, i.e., functions where the source code and information about input bounds are unavailable. Our approach is the first to use Bayesian optimization (BO) to identify such inputs and uses novel strategies to overcome the challenges that arise in applying BO to this problem. We implement our approach in the </span><span><span>Xscope</span></span> framework and demonstrate it on 58 functions from the CUDA Math Library and 81 functions from the Intel Math Library. <span><span>Xscope</span></span> is able to identify inputs that trigger exceptions in about 73% of the tested functions.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103042"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Parallelizable efficient large order multiple recursive generators
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-01 | DOI: 10.1016/j.parco.2023.103036
Lih-Yuan Deng, Bryan R. Winter, Jyh-Jen Horng Shiau, Henry Horng-Shing Lu, Nirman Kumar, Ching-Chi Yang

The general multiple recursive generator (MRG) of maximum period has long been regarded as an excellent source of pseudorandom numbers. Based on a kth-order linear recurrence modulo p, this generator produces the next pseudorandom number from a linear combination of the previous k numbers. General maximum-period MRGs of order k have excellent empirical performance, and their strong mathematical foundations have been studied extensively.

For computing efficiency, it is common to consider special MRGs with a simple structure and few non-zero terms, which require fewer costly multiplications. However, such MRGs do not have as good a "spectral test" property as general MRGs with many non-zero terms. On the other hand, there are two potential problems with using general MRGs with many non-zero terms: (1) efficient implementation and (2) an efficient scheme for parallelization. Efficient implementation of general MRGs of larger order k can be difficult because the kth-order linear recurrence requires many costly multiplications to produce the next number. As for parallelization, for a large k, traditional schemes such as the "jump-ahead parallelization method" for general MRGs become highly computationally inefficient. We propose implementing maximum-period MRGs with many non-zero terms efficiently and in parallel by using an MCG constructed from the MRG. In particular, we propose a special class of large-order MRGs with many non-zero terms that also have an efficient and parallel implementation.
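
For reference, a kth-order MRG follows the recurrence x_n = (a_1*x_{n-1} + ... + a_k*x_{n-k}) mod p, with outputs normalized to (0, 1). The sketch below implements this recurrence directly; the tiny order, modulus, and coefficients are illustrative only, since maximum-period MRGs require carefully chosen parameters, and the paper's MCG-based parallel scheme is not shown.

```python
# Direct implementation of a kth-order multiple recursive generator.
class MRG:
    def __init__(self, coeffs, seed_state, p=2**31 - 1):
        assert len(coeffs) == len(seed_state)
        self.a = list(coeffs)          # a_1 ... a_k
        self.state = list(seed_state)  # x_{n-1} ... x_{n-k}, most recent first
        self.p = p

    def next_uniform(self):
        x = sum(a * s for a, s in zip(self.a, self.state)) % self.p
        self.state = [x] + self.state[:-1]  # shift the new value into the state
        return x / self.p

# Illustrative order-3 parameters (NOT a maximum-period choice).
gen = MRG(coeffs=[177786, 0, 64654], seed_state=[12345, 67890, 13579])
print([round(gen.next_uniform(), 6) for _ in range(5)])
```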

{"title":"Parallelizable efficient large order multiple recursive generators","authors":"Lih-Yuan Deng ,&nbsp;Bryan R. Winter ,&nbsp;Jyh-Jen Horng Shiau ,&nbsp;Henry Horng-Shing Lu ,&nbsp;Nirman Kumar ,&nbsp;Ching-Chi Yang","doi":"10.1016/j.parco.2023.103036","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103036","url":null,"abstract":"<div><p>The general multiple recursive generator (MRG) of maximum period has been thought of as an excellent source of pseudo random numbers. Based on a <span><math><mi>k</mi></math></span>th order linear recurrence modulo <span><math><mi>p</mi></math></span><span>, this generator produces the next pseudo random number based on a linear combination of the previous </span><span><math><mi>k</mi></math></span> numbers. General maximum period MRGs of order <span><math><mi>k</mi></math></span> have excellent empirical performance, and their strong mathematical foundations have been studied extensively.</p><p><span>For computing efficiency, it is common to consider special MRGs with some simple structure with few non-zero terms which requires fewer costly multiplications. However, such MRGs will not have a good “spectral test” property when compared with general MRGs with many non-zero terms. On the other hand, there are two potential problems of using general MRGs with many non-zero terms: (1) its efficient implementation (2) its efficient scheme for its parallelization. Efficient implementation of general MRGs of larger order </span><span><math><mi>k</mi></math></span> can be difficult because the <span><math><mi>k</mi></math></span>th order linear recurrence requires many costly multiplications to produce the next number. For its parallelization scheme, for a large <span><math><mi>k</mi></math></span>, the traditional scheme like “jump-ahead parallelization method” for general MRGs becomes highly computationally inefficient. We proposed implementing maximum period MRGs with many nonzero terms efficiently and in parallel by using a MCG constructed from the MRG. In particular, we propose a special class of large order MRGs with many nonzero terms that also have an efficient and parallel implementation.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103036"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An optimal scheduling algorithm considering the transactions worst-case delay for multi-channel hyperledger fabric network
IF 1.4 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2023-09-01 | DOI: 10.1016/j.parco.2023.103041
Ou Wu, Shanshan Li, He Zhang, Liwen Liu, Haoming Li, Yanze Wang, Ziyi Zhang

As the most popular consortium blockchain platform, Hyperledger Fabric (Fabric for short) has released multiple versions that support different consensus protocols to address the risks faced in current and future network transactions. For example, Fabric v1.4 and v2.0 use Kafka and Raft mechanisms to complete consensus and ensure that the system can withstand failures such as crashes, network partitions, or network shutdowns. In a multi-channel Fabric network architecture, the system structure cannot guarantee the behavior of malicious nodes. Complex cooperation between peer groups on different channels can greatly affect the security and efficiency of the entire network architecture, and these effects are challenging to estimate and optimize.

To address this challenge, we design a Drift Plus Penalty Algorithm (DPPA) and a Transaction Worst-case Delay Algorithm (TWDA) based on random scheduling of peer nodes within the Lyapunov optimization framework. The DPPA ensures the stability of the system and provides the maximum transaction processing rate under a minimum safety probability. Numerical results show that this algorithm achieves a good balance between system security probability and queue accumulation. The TWDA discards transactions with excessively long delays by setting a worst-case transaction delay threshold. Considering both the security probability and the queue accumulation of the Fabric system, the optimal scheduling of peer nodes is given. Numerical simulations of the two algorithms show that the security of the TWDA is slightly worse than that of the DPPA, but its system queue accumulation is significantly smaller. The simulation results therefore not only validate the effectiveness of the two algorithms but also provide operators with operational strategies that account for different factors.
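
To illustrate the worst-case-delay mechanism, the sketch below drops any queued transaction whose waiting time exceeds a fixed threshold before serving the rest. The queue model, the threshold value, and the `schedule_step` helper are illustrative assumptions rather than the paper's Lyapunov-based formulation.

```python
# Worst-case-delay scheduling step: stale transactions are discarded.
from collections import deque

WORST_CASE_DELAY = 5.0  # seconds; hypothetical threshold

def schedule_step(queue, now, service_capacity):
    """Serve up to service_capacity transactions, dropping stale ones."""
    served, dropped = [], []
    while queue and len(served) < service_capacity:
        tx_id, arrived = queue.popleft()
        if now - arrived > WORST_CASE_DELAY:
            dropped.append(tx_id)   # exceeds the worst-case delay: discard
        else:
            served.append(tx_id)
    return served, dropped

q = deque([("tx1", 0.0), ("tx2", 6.0), ("tx3", 9.5)])
print(schedule_step(q, now=10.0, service_capacity=2))
# (['tx2', 'tx3'], ['tx1']): tx1 has waited 10 s and is dropped
```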

{"title":"An optimal scheduling algorithm considering the transactions worst-case delay for multi-channel hyperledger fabric network","authors":"Ou Wu ,&nbsp;Shanshan Li ,&nbsp;He Zhang ,&nbsp;Liwen Liu ,&nbsp;Haoming Li ,&nbsp;Yanze Wang ,&nbsp;Ziyi Zhang","doi":"10.1016/j.parco.2023.103041","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103041","url":null,"abstract":"<div><p><span><span>As the most popular consortium blockchain platform, Hyperledger Fabric (Fabric for short) has released multiple versions that support different consensus protocols to address the risks faced in current and future network transactions. For example, Fabric v1.4 and v2.0 use Kafka and Raft mechanisms to complete consensus and ensure that the system can withstand failures such as crashes, </span>network partitions, or network shutdowns. In a multi-channel Fabric </span>network architecture, the system structure cannot guarantee the behavior of malicious nodes. Complex cooperation between peer groups on different channels can greatly affect the security and efficiency of the entire network architecture, which is challenging to estimate and optimize.</p><p><span><span>To address this challenge, we designed a Drift Plus Penalty Algorithm (DPPA) and a Transaction Worst-case Delay Algorithm (TWDA) based on peer node random scheduling using the Lyapunov optimization framework. The DPPA ensures the stability of the system and provides the maximum </span>transaction processing rate under the minimum safety probability. The numerical results show that this algorithm can achieve a good balance between system security probability and queue accumulation. The TWDA considers discarding transactions with excessively long </span>delay time by setting a worst-case transaction delay threshold. When considering both the security probability and queue accumulation of the Fabric system, the optimal scheduling of peer nodes is given. Numerical simulations were conducted on two types of algorithms, and the results showed that the security of the TWDA was slightly worse than that of the DPPA, but the system queue accumulation was significantly smaller. Therefore, the simulation results not only validate the effectiveness of the two types of algorithms but also provide operators with operational strategies that consider different factors.</p></div>","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 ","pages":"Article 103041"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49877860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0