
Parallel Computing: Latest Publications

Program partitioning and deadlock analysis for MPI based on logical clocks
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-12-04 · DOI: 10.1016/j.parco.2023.103061
Shushan Li, Meng Wang, Hong Zhang, Yao Liu

The message passing interface (MPI) has become a standard programming model in high performance computing, so ensuring the reliability of MPI programs by detecting errors in them is of great importance. However, deadlock, one of the most common errors in MPI programs, is difficult to detect because of the non-determinism and asynchronous communication that MPI supports. Existing approaches mainly detect deadlocks by traversing all possible execution paths of an MPI program, but their efficiency is limited because the number of execution paths grows exponentially with the number of wildcard receives and processes in the program.

To alleviate this path-explosion problem for single-path MPI programs, we propose a program partitioning approach based on logical clocks for detecting deadlocks. The program is first divided into several preliminary partitions according to a matching detection rule. To obtain the dependency relationships among partitions, a Binary Lazy Clocks algorithm is proposed to assign clocks to communication operations. Based on these clocks, the completion order of the communication operations in each process is tracked. We then derive the dependency relationships of the preliminary partitions by analyzing these completion orders and merge the preliminary partitions accordingly to generate the final partitions. Finally, deadlocks are detected by traversing all possible execution paths of each final partition. We have implemented our method in a tool called PDMPI and evaluated it on 14 programs. The experimental results indicate that PDMPI detects deadlocks in MPI programs more effectively than the two most closely related tools, ISP and SAMPI, especially in programs with numerous interleavings.
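
To make the nondeterminism concrete, the sketch below (not taken from the paper) shows a classic wildcard-receive pattern whose outcome depends on runtime message matching; it is written against mpi4py for brevity. A deadlock detector of this kind must enumerate both possible matchings to prove the program unsafe.

```python
# Illustrative only: a wildcard receive that deadlocks on some interleavings.
# Run with: mpirun -n 3 python wildcard_deadlock.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # If the wildcard matches rank 2 first, the second receive is satisfied
    # by rank 1's send and the program completes. If it matches rank 1 first,
    # the second receive (source=1) can never be satisfied and rank 0 hangs,
    # leaving rank 2's send unmatched -- a deadlock the detector must find.
    data = comm.recv(source=MPI.ANY_SOURCE, tag=0)
    data = comm.recv(source=1, tag=0)
elif rank == 1:
    comm.send("from 1", dest=0, tag=0)
elif rank == 2:
    comm.send("from 2", dest=0, tag=0)
```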

Citations: 0
OF-WFBP: A near-optimal communication mechanism for tensor fusion in distributed deep learning
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-11-01 · DOI: 10.1016/j.parco.2023.103053
Yunqi Gao, Zechao Zhang, Bing Hu, A-Long Jin, Chunming Wu

The communication bottleneck has severely restricted the scalability of distributed deep learning. Tensor fusion improves the scalability of data parallelism by overlapping computation and communication tasks, but existing tensor fusion schemes yield only suboptimal training performance. In this paper, we propose an efficient communication mechanism (OF-WFBP) that finds the optimal tensor fusion scheme for synchronous data parallelism. We formulate the mathematical model of OF-WFBP and prove that it is NP-hard. We solve the model analytically in two cases, and propose an improved sparrow search algorithm (GradSSA) to find near-optimal tensor fusion schemes efficiently in the remaining cases. Experimental results on two different GPU clusters show that OF-WFBP achieves up to 1.43x speedup over state-of-the-art tensor fusion mechanisms.
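
As a rough illustration of the mechanism being optimized, the sketch below shows threshold-based tensor fusion, the baseline idea that schemes like OF-WFBP improve upon. The function names and the fixed byte threshold are illustrative assumptions; the paper's contribution is precisely to choose the fusion scheme optimally rather than by a fixed threshold.

```python
# A minimal sketch of threshold-based tensor fusion (not OF-WFBP itself):
# gradients produced during backpropagation are packed into a bucket and
# communicated as one message once a size threshold is reached, trading
# per-message latency against overlap with the remaining computation.
import numpy as np

def fuse_and_send(grads, threshold_bytes, allreduce):
    """grads: iterable of (name, numpy array) in backprop order.
    allreduce: communication primitive applied to one fused bucket.
    threshold_bytes: the hand-tuned knob an optimal scheme replaces."""
    bucket, bucket_size = [], 0
    for name, g in grads:
        bucket.append((name, g))
        bucket_size += g.nbytes
        if bucket_size >= threshold_bytes:
            allreduce(bucket)          # one fused communication task
            bucket, bucket_size = [], 0
    if bucket:
        allreduce(bucket)              # flush the tail

sent = []
fuse_and_send([("w", np.zeros(1000)), ("b", np.zeros(10))],
              threshold_bytes=4096, allreduce=sent.append)
print([len(b) for b in sent])   # [1, 1]: the first tensor alone exceeds the threshold
```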

Citations: 0
Low consumption automatic discovery protocol for DDS-based large-scale distributed parallel computing
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-11-01 · DOI: 10.1016/j.parco.2023.103052
Zhexu Liu, Shaofeng Liu, Zhiyong Fan, Zhen Zhao

DDS (Data Distribution Service) is an efficient communication specification for distributed parallel computing. However, as the scale of computation grows, high network load and memory consumption increasingly limit its performance. This paper proposes a low-consumption automatic discovery protocol to improve DDS for large-scale distributed parallel computing. First, an improved Bloom filter called TBF (Threshold Bloom Filter) is presented to compress the data topics. It is then combined with the SDP (Simple Discovery Protocol) to reduce the overhead of the automatic discovery process in DDS. On this basis, data publications and subscriptions among the distributed computing nodes are matched using a binarization threshold θ and a decision threshold T, both obtained through iterative optimization algorithms. Experimental results show that SDPTBF guarantees higher transmission accuracy while reducing network load and memory consumption, thereby improving the performance of DDS-based large-scale distributed parallel computing.
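
For background, a minimal standard Bloom filter over topic strings is sketched below; it shows only how topics compress into a fixed-size bit array with a small false-positive rate. The threshold mechanism (θ, T) that distinguishes TBF, and its integration with SDP, are not reproduced here.

```python
# A plain Bloom filter over topic names -- the baseline that TBF extends.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, topic):
        for p in self._positions(topic):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, topic):
        # False positives possible, false negatives impossible.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(topic))

bf = BloomFilter()
bf.add("sensor/temperature")
assert bf.might_contain("sensor/temperature")
```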

Citations: 0
Targeting performance and user-friendliness: GPU-accelerated finite element computation with automated code generation in FEniCS
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-10-06 · DOI: 10.1016/j.parco.2023.103051
James D. Trotter, Johannes Langguth, Xing Cai

This paper studies the use of automated code generation to provide user-friendly GPU acceleration for solving partial differential equations (PDEs) with finite element methods. By extending the FEniCS framework and its automated compiler, we enable a high-level description of finite element computations written in the Unified Form Language to be automatically translated into parallelised CUDA C++ code. The auto-generated code offloads to the GPU the finite element assembly of linear equation systems, which are then solved by a GPU-supported linear algebra backend.

Specifically, we explore several auto-generated optimisations of the resulting CUDA C++ code. Numerical experiments show that GPU-based linear system assembly for a typical PDE with first-order elements benefits from using a lookup table to avoid repeatedly carrying out numerous binary searches, and that further performance gains can be obtained by assembling the sparse matrix row by row. More importantly, the extended FEniCS compiler seamlessly couples the assembly and solution phases for GPU acceleration, so that all unnecessary CPU–GPU data transfers are eliminated. Detailed experiments quantify the negative impact of these data transfers, which can entirely destroy the potential of GPU acceleration if the assembly and solution phases are offloaded to the GPU separately. Finally, a complete, auto-generated GPU-based PDE solver for a nonlinear solid mechanics application demonstrates a substantial speedup over running on dual-socket multi-core CPUs, including GPU acceleration of algebraic multigrid as the preconditioner.
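
As an illustration of the kind of high-level input involved, the snippet below is a textbook Poisson problem written against the standard (legacy) FEniCS/UFL API. With the authors' extended compiler, such a description is the starting point for the auto-generated CUDA C++; the snippet itself is ordinary CPU-side FEniCS code.

```python
# Textbook Poisson problem in legacy FEniCS/UFL: the high-level description
# that an automated form compiler turns into low-level assembly kernels.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction,
                    TestFunction, Constant, DirichletBC, Function,
                    inner, grad, dx, solve)

mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "P", 1)          # first-order elements, as in the experiments
u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)

a = inner(grad(u), grad(v)) * dx         # bilinear form: this assembly is what gets offloaded
L = f * v * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

uh = Function(V)
solve(a == L, uh, bc)                    # assembly + linear solve
```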

Citations: 0
Task graph-based performance analysis of parallel-in-time methods
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-14 · DOI: 10.1016/j.parco.2023.103050
Matthias Bolten, Stephanie Friedhoff, Jens Hahne

In this paper, we present a performance model based on task graphs for various iterative parallel-in-time (PinT) methods. PinT methods have been developed to speed up the simulation of time-dependent problems on modern parallel supercomputers. The performance model is based on a data-driven notation of the methods, from which a task graph is generated. From this task graph and a distribution of time points across processes typical of PinT methods, a theoretical lower bound on the method's runtime can be obtained, as well as a runtime prediction for a given number of processes. In particular, the model covers the large parameter space of PinT methods and makes predictions for arbitrary parameter settings. We describe a general procedure for generating task graphs for three iterative PinT methods: Parareal, multigrid-reduction-in-time (MGRIT), and the parallel full approximation scheme in space and time (PFASST). Furthermore, we discuss how these task graphs can be used to analyze the performance of the methods, and we compare the model's predictions with parallel simulation times from five different PinT libraries.
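
A lower runtime bound over a task graph is, at its core, a longest-path (critical-path) computation on a weighted DAG; the sketch below shows that standard calculation on a toy Parareal-like graph. It is a simplification of the paper's model, which also accounts for the distribution of time points across processes, and the task names and costs are invented for illustration.

```python
# Critical path of a task DAG: with unlimited processes, no schedule can
# finish faster than the longest weighted path through the graph.
from functools import lru_cache

def critical_path(tasks, deps):
    """tasks: {task: cost}; deps: {task: [predecessor tasks]}."""
    @lru_cache(maxsize=None)
    def finish(t):
        preds = deps.get(t, [])
        return tasks[t] + (max(map(finish, preds)) if preds else 0.0)
    return max(finish(t) for t in tasks)

# Toy Parareal-like fragment: coarse step G1, two parallel fine solves,
# then a coarse correction G2 that waits on F1a.
tasks = {"G1": 1.0, "F1a": 4.0, "F1b": 4.0, "G2": 1.0}
deps = {"F1a": ["G1"], "F1b": ["G1"], "G2": ["G1", "F1a"]}
print(critical_path(tasks, deps))   # 6.0: G1 -> F1a -> G2
```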

Citations: 0
Distributed software defined network-based fog to fog collaboration scheme
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 · DOI: 10.1016/j.parco.2023.103040
Muhammad Kabeer, Ibrahim Yusuf, Nasir Ahmad Sufi

Fog computing was created to supplement the cloud in bridging the communication-delay gap by deploying fog nodes nearer to Internet of Things (IoT) devices. Depending on geographical location, computational resources, and the rate of IoT requests, fog nodes can be idle or saturated. Saturated nodes require a special mechanism that enables collaboration with other nodes through service offloading so as to improve resource utilization. Software Defined Networking (SDN) offers improved bandwidth, latency, and awareness of network topology, which has recently attracted researchers' attention and delivered promising results for service offloading. In this study, a hierarchical Distributed Software Defined Network-based (DSDN) fog-to-fog collaboration model is proposed; the scheme considers computational resources, such as the available CPU, and network resources, such as the communication hops to a prospective offloading node. Because fog nodes have limited resources and demand for fog services is projected to be high in the near future, the model also accounts for extreme cases in which all nearby nodes in a fog domain are saturated, employing a supervisor controller to scale the collaboration to other domains. Simulations carried out on Mininet show that the proposed multi-controller DSDN solution outperforms the traditional single-controller SDN solution, and further demonstrate that increasing the number of fog nodes does not significantly affect service offloading performance when multiple controllers are used.
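
A toy version of such an offloading decision is sketched below: candidates are ranked by free CPU and penalized by hop count, with the fully saturated case escalated beyond the local domain. The weights, thresholds, and field names are invented for illustration and are not the paper's actual policy.

```python
# Illustrative offload-target selection: more free CPU is better, more
# communication hops is worse; if every local node is saturated, return
# None to signal escalation to the supervisor controller.
def pick_offload_target(candidates, cpu_weight=0.7, hop_weight=0.3):
    """candidates: list of dicts with 'name', 'free_cpu' in [0,1], 'hops' >= 1."""
    viable = [c for c in candidates if c["free_cpu"] > 0.1]   # skip saturated nodes
    if not viable:
        return None   # scale the collaboration to another fog domain
    return max(viable, key=lambda c: cpu_weight * c["free_cpu"]
                                     - hop_weight * (c["hops"] - 1))

target = pick_offload_target([
    {"name": "fog-a", "free_cpu": 0.8, "hops": 2},
    {"name": "fog-b", "free_cpu": 0.5, "hops": 1},
])
print(target["name"])   # fog-b: being one hop closer outweighs fog-a's extra CPU
```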

Citations: 0
Optimizing massively parallel sparse matrix computing on ARM many-core processor
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 · DOI: 10.1016/j.parco.2023.103035
Jiang Zheng, Jiazhi Jiang, Jiangsu Du, Dan Huang, Yutong Lu

Sparse matrix multiplication is ubiquitous in applications such as graph processing and numerical simulation. In recent years, numerous efficient sparse matrix multiplication algorithms and computational libraries have been proposed. However, most of them target x86 or GPU platforms, while optimization on ARM many-core platforms has not been well investigated. Our experiments show that existing sparse matrix multiplication libraries cannot achieve the expected parallel performance on ARM many-core CPUs. Compared with a traditional multi-core CPU, an ARM many-core CPU has far more cores and often adopts NUMA techniques to scale memory bandwidth; its parallel efficiency tends to be restricted by the NUMA configuration, memory bandwidth, cache contention, and other factors.

In this paper, we propose optimized implementations of sparse matrix computations on ARM many-core CPUs. We apply various optimization techniques to several sparse matrix multiplication routines to ensure coalesced access to matrix elements in memory. In detail, these techniques include a CSR-based format fine-tuned for the ARM architecture, and the co-optimization of Gustavson's algorithm with a hierarchical cache and a dense-array strategy to mitigate the performance loss caused by handling compressed storage formats. We exploit a coarse-grained NUMA-aware strategy for inter-node parallelism and a fine-grained cache-aware strategy for intra-node parallelism to improve the parallel efficiency of sparse matrix multiplication. Our evaluation shows that our implementation consistently outperforms the existing library on an ARM many-core processor.
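
For reference, the sketch below is a plain sequential version of Gustavson's row-wise SpGEMM over CSR inputs, using a dense accumulator row. This is the textbook form of the algorithm that the paper co-optimizes with ARM's cache hierarchy, not the authors' tuned implementation.

```python
# Gustavson's row-wise SpGEMM with a dense accumulator ("dense array strategy").
def spgemm_gustavson(A, B, n_cols_B):
    """A, B: CSR triples (indptr, indices, data); returns C = A @ B as CSR."""
    Ap, Aj, Ax = A
    Bp, Bj, Bx = B
    Cp, Cj, Cx = [0], [], []
    acc = [0.0] * n_cols_B          # dense accumulator: one full output row
    flag = [False] * n_cols_B       # which columns of the row are occupied
    for i in range(len(Ap) - 1):
        touched = []
        for k in range(Ap[i], Ap[i + 1]):        # nonzeros a_ij of A's row i
            j, a = Aj[k], Ax[k]
            for l in range(Bp[j], Bp[j + 1]):    # scatter a_ij * B[j, :]
                col = Bj[l]
                if not flag[col]:
                    flag[col] = True
                    touched.append(col)
                acc[col] += a * Bx[l]
        for col in sorted(touched):              # gather row i of C, reset state
            Cj.append(col)
            Cx.append(acc[col])
            acc[col] = 0.0
            flag[col] = False
        Cp.append(len(Cj))
    return Cp, Cj, Cx

I = ([0, 1, 2], [0, 1], [1.0, 1.0])              # 2x2 identity in CSR
print(spgemm_gustavson(I, I, 2))                 # ([0, 1, 2], [0, 1], [1.0, 1.0])
```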

Citations: 0
Editorial on Advances in High Performance Programming
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 · DOI: 10.1016/j.parco.2023.103037
A. Marowka, Przemysław Stpiczyński
{"title":"Editorial on Advances in High Performance Programming","authors":"A. Marowka, Przemysław Stpiczyński","doi":"10.1016/j.parco.2023.103037","DOIUrl":"https://doi.org/10.1016/j.parco.2023.103037","url":null,"abstract":"","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"117 1","pages":"103037"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55107714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parallelizable efficient large order multiple recursive generators
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 · DOI: 10.2139/ssrn.4344139
L. Deng, Bryan R. Winter, J. H. Shiau, Henry Horng-Shing Lu, Nirman Kumar, Ching-Chi Yang
{"title":"Parallelizable efficient large order multiple recursive generators","authors":"L. Deng, Bryan R. Winter, J. H. Shiau, Henry Horng-Shing Lu, Nirman Kumar, Ching-Chi Yang","doi":"10.2139/ssrn.4344139","DOIUrl":"https://doi.org/10.2139/ssrn.4344139","url":null,"abstract":"","PeriodicalId":54642,"journal":{"name":"Parallel Computing","volume":"78 1","pages":"103036"},"PeriodicalIF":1.4,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73726981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Finding inputs that trigger floating-point exceptions in heterogeneous computing via Bayesian optimization
IF 1.4 · CAS Zone 4 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2023-09-01 · DOI: 10.1016/j.parco.2023.103042
Ignacio Laguna, Anh Tran, Ganesh Gopalakrishnan

Testing code for floating-point exceptions is crucial, as exceptions can quickly propagate and produce unreliable numerical answers. The state of the art for testing floating-point exceptions in heterogeneous systems is quite limited, and existing solutions require the application's source code, which precludes their use in accelerated libraries whose source is not publicly available. We present an approach that finds inputs triggering floating-point exceptions in black-box CPU or GPU functions, i.e., functions for which the source code and information about input bounds are unavailable. Our approach is the first to use Bayesian optimization (BO) to identify such inputs, and it uses novel strategies to overcome the challenges that arise in applying BO to this problem. We implement our approach in the Xscope framework and demonstrate it on 58 functions from the CUDA Math Library and 81 functions from the Intel Math Library. Xscope identifies inputs that trigger exceptions in about 73% of the tested functions.
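
The outer loop of such a tool can be sketched as below, with simple log-uniform random sampling standing in for the Bayesian-optimization search that Xscope actually performs; the function and helper names are invented for illustration.

```python
# Black-box search for inputs that produce floating-point exceptions:
# feed candidate inputs to an opaque numeric function and flag any that
# raise or return inf/NaN. A BO loop would propose candidates instead
# of sampling them at random.
import math
import random

def triggers_exception(f, x):
    try:
        y = f(x)
    except (OverflowError, ValueError):      # e.g. exp overflow, log of a negative
        return True
    return math.isinf(y) or math.isnan(y)

def search_inputs(f, trials=10000, seed=0):
    rng = random.Random(seed)
    hits = []
    for _ in range(trials):
        # Log-uniform magnitude sampling covers many binades cheaply.
        x = math.copysign(10 ** rng.uniform(-300, 300), rng.choice([-1.0, 1.0]))
        if triggers_exception(f, x):
            hits.append(x)
    return hits

print(len(search_inputs(lambda x: math.exp(x) / x)))   # counts overflow-triggering inputs
```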

引用次数: 1