
Latest publications in IEEE Transactions on Parallel and Distributed Systems

Reducing Cross-Pod Communication Overhead for MoE Model Training With Hybrid Parallelism in Multi-Tenant Clusters
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-27 | DOI: 10.1109/TPDS.2026.3668417
Huihuang Qin;Shuangwu Chen;Zian Wang;Tao Zhang;Ziyang Zou;Xiaobin Tan;Shiyin Zhu;Jian Yang
The massive parameter scale of sparsely-activated Mixture-of-Experts (MoE) models necessitates distributed training with hybrid parallelism. Placing such training tasks, i.e., mapping the logical partitions of an MoE model to available physical NPUs, is challenging. Due to the bandwidth and latency discrepancies between intra- and inter-Pod links, cross-Pod communication usually becomes a bottleneck. The high dispersion of NPUs in multi-tenant clusters exacerbates this issue further. However, few studies have considered the cross-Pod model placement problem. To address this challenge, we propose a novel model placement scheme tailored for MoE model training with hybrid parallelism in multi-tenant clusters. By quantifying the cross-Pod communication overhead incurred during MoE model training, we formulate model placement as a 0-1 integer quadratic problem, which is NP-hard. Motivated by the traffic differences between parallelism strategies, we decompose this problem into two subproblems. To solve them, we propose a lightweight two-stage algorithm based on a Best-Fit strategy and neighborhood search. Experiments under different models and network topologies show that our model placement scheme can reduce cross-Pod traffic by 35.9% and cut communication time by 18.7% compared to state-of-the-art methods.
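The two-stage idea in the abstract (a Best-Fit placement stage followed by a neighborhood search over that placement) can be sketched in miniature. This is a hypothetical illustration, not the paper's algorithm: the pod capacities, partition demands, and pairwise traffic model below are all invented, and the actual formulation is a 0-1 integer quadratic program.

```python
def best_fit(demands, capacities):
    """Stage 1: place each partition in the pod with the least
    remaining capacity that still fits it (classic Best-Fit)."""
    free = list(capacities)
    placement = []
    for d in demands:
        cands = [i for i, f in enumerate(free) if f >= d]
        if not cands:
            raise ValueError("no pod fits partition")
        best = min(cands, key=lambda i: free[i])  # tightest fit
        free[best] -= d
        placement.append(best)
    return placement

def cross_pod_traffic(placement, traffic):
    """Sum traffic between partition pairs placed in different pods."""
    n = len(placement)
    return sum(traffic[i][j]
               for i in range(n) for j in range(i + 1, n)
               if placement[i] != placement[j])

def neighborhood_search(placement, demands, capacities, traffic, iters=100):
    """Stage 2: greedily move one partition to another pod whenever it
    lowers cross-pod traffic and respects pod capacities."""
    placement = list(placement)
    for _ in range(iters):
        improved = False
        for p in range(len(placement)):
            for pod in range(len(capacities)):
                if pod == placement[p]:
                    continue
                used = sum(d for q, d in enumerate(demands)
                           if placement[q] == pod)
                if used + demands[p] > capacities[pod]:
                    continue
                cand = list(placement)
                cand[p] = pod
                if (cross_pod_traffic(cand, traffic)
                        < cross_pod_traffic(placement, traffic)):
                    placement, improved = cand, True
        if not improved:
            break
    return placement
```

With three partitions of demand 2 and two pods of capacity 4, Best-Fit packs the first two partitions together, and the local search then only accepts moves that shrink cross-pod traffic.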
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 5, pp. 1062-1078.
Citations: 0
Adaptive Block-Wise Mapping With Intra-Block Resource Allocation for Multi-DNN Workloads on Heterogeneous Accelerator Systems
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-23 | DOI: 10.1109/TPDS.2026.3667207
Zhenyu Nie;Haotian Wang;Anthony Theodore Chronopoulos;Zhuo Tang;Kenli Li;Chubo Liu;Zheng Xiao
Deep neural networks (DNNs) dominate workloads on cloud and edge platforms. Meanwhile, hardware platforms are evolving toward heterogeneous systems with various accelerators. By mapping layers to their respective preferred accelerators, the computation cost of each layer can be reduced, while mapping layers onto the same accelerator reduces the inter-accelerator communication cost. These two costs are often competing and difficult to optimize simultaneously. Therefore, the core challenge in achieving efficient execution of DNN workloads on heterogeneous systems is: how to map layers to achieve the best trade-off between computation and communication costs. Existing works group layers into blocks and perform block-wise mapping to reduce inter-layer communication within blocks. However, when grouping layers, they typically rely on model-agnostic rules, which fail to hide critical inter-layer communication within blocks for diverse DNNs. Moreover, after block mapping, the lack of intra-block resource allocation further increases each block's computation cost. In this paper, we propose GHCoM, a novel block-wise mapping framework for exploring effective cost trade-offs. GHCoM employs an adaptive grouping strategy that guides layer grouping based on the topology of DNNs and dynamically adjusts the grouping according to the trade-off target. Furthermore, GHCoM considers the fine-grained allocation of computation (i.e., processing elements) and communication (i.e., on-chip bandwidth) resources within each block to mitigate inter-layer resource contention. To jointly optimize layer grouping, block-wise mapping, and intra-block resource allocation, GHCoM leverages a two-level genetic algorithm (GA) with tailored encodings and operators that capture the interdependence across the entire design space.
Experiments across various workloads and system configurations show that GHCoM consistently outperforms state-of-the-art baselines, achieving 1.08× to 4.79× speedup in execution latency and reducing energy consumption by 1.83% to 87.71%.
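The two-level structure described above (an outer search over block-to-accelerator mappings, an inner allocation of resources inside each block) can be illustrated with a toy genetic loop. Everything here is invented for illustration: the accelerator speeds, per-layer costs, communication model, and the proportional processing-element split are placeholders, not GHCoM's actual encodings or operators.

```python
import random

random.seed(0)

SPEED = {"gpu": 4.0, "npu": 2.0}      # hypothetical accelerator speeds
LAYER_COST = [8.0, 4.0, 6.0, 2.0]     # hypothetical per-layer compute cost
COMM = 1.0                            # cost charged at each block boundary

def intra_block_alloc(block, pes=8):
    """Inner level: split PEs across a block's layers proportionally to
    their cost, so all layers finish together; return block time."""
    return sum(LAYER_COST[l] for l in block) / pes

def fitness(mapping, blocks):
    """Outer level: total latency = per-block compute on the assigned
    accelerator plus communication at every block boundary."""
    t = sum(intra_block_alloc(blk) / SPEED[acc]
            for blk, acc in zip(blocks, mapping))
    return t + COMM * (len(blocks) - 1)

def evolve(blocks, pop_size=12, gens=30):
    """Tiny elitist GA over block-to-accelerator assignments."""
    accs = list(SPEED)
    pop = [[random.choice(accs) for _ in blocks] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, blocks))
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        for m in survivors:                       # mutate one gene each
            child = list(m)
            child[random.randrange(len(child))] = random.choice(accs)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda m: fitness(m, blocks))
```

For two blocks of two layers each, the loop converges toward assigning both blocks to the faster accelerator under this toy cost model.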
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 1015-1031.
Citations: 0
Fed-Grow: Federating to Grow Transformers for Resource-Constrained Users Without Model Sharing
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-19 | DOI: 10.1109/TPDS.2026.3666309
Shikun Shen;Yifei Zou;Yuan Yuan;Hanlin Gu;Peng Li;Xiuzhen Cheng;Falko Dressler;Dongxiao Yu
The growing resource demands of large-scale transformer models pose significant challenges for resource-constrained users, particularly in distributed environments. To address this issue, we propose a federated learning framework called Fed-Grow, which enables multiple participants to collaboratively learn a lightweight scaling operation that transfers knowledge from pre-trained small models to a large transformer model. In Fed-Grow, we introduce the Dual-LiGO (Dual Linear Growth Operator) architecture, consisting of Local-LiGO and Global-LiGO components. Local-LiGO addresses model heterogeneity by adapting each participant’s pre-trained model to a common intermediate form, while Global-LiGO facilitates knowledge sharing across participants without sharing local models or raw data, ensuring privacy preservation. This federated approach offers a scalable solution for growing large transformers in a distributed manner: only the Global-LiGO is shared, significantly reducing communication overhead while maintaining comparable model performance under the same communication constraints. Experimental results demonstrate that Fed-Grow outperforms state-of-the-art methods in terms of accuracy and precision, while reducing the number of trainable parameters by 59.25% and communication costs by 73.01%. These improvements allow for higher efficiency in training large models in distributed environments, without sacrificing performance. To the best of our knowledge, Fed-Grow is the first method that enables cooperative transformer scaling in a distributed setting, making it a practical solution for resource-constrained users.
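The notion of a linear growth operator (as in LiGO, which Dual-LiGO builds on) can be illustrated by expanding a small weight matrix into a wider one via left and right linear maps, W_big = A · W · Bᵀ. In this toy sketch A and B are fixed duplication maps that tile the small weights into the larger matrix; in Fed-Grow the corresponding maps would be learned, and all dimensions here are illustrative.

```python
def matmul(X, Y):
    """Plain-Python matrix product (no external dependencies)."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def duplication_map(small, big):
    """big x small map that copies unit (k mod small) of the small
    layer to unit k of the big layer (a simple tiling expansion)."""
    return [[1.0 if j == i % small else 0.0 for j in range(small)]
            for i in range(big)]

def grow(W, d_in_big, d_out_big):
    """Expand W (d_in x d_out) to d_in_big x d_out_big via A @ W @ B^T."""
    d_in, d_out = len(W), len(W[0])
    A = duplication_map(d_in, d_in_big)      # expands the input dim
    B = duplication_map(d_out, d_out_big)    # expands the output dim
    Bt = [[B[i][j] for i in range(len(B))] for j in range(len(B[0]))]
    return matmul(matmul(A, W), Bt)
```

Growing a 2x2 weight matrix to 4x4 with these duplication maps simply tiles the original block four times; a learned operator would instead pick the expansion that best preserves the pre-trained model's behavior.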
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 5, pp. 1048-1061.
Citations: 0
Styx: An Efficient Workflow Engine for Serverless Platforms
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-17 | DOI: 10.1109/TPDS.2026.3665533
Abhisek Panda;Smruti R. Sarangi
Serverless platforms are widely adopted for deploying applications due to their autoscaling capabilities and pay-as-you-go billing models. These platforms execute an application’s functions inside ephemeral containers and scale the number of containers based on incoming request rates. To meet service level objectives (SLOs), they often over-provision resources by maintaining warm containers or rapidly spawning new ones during traffic bursts. However, this strategy frequently leads to inefficient resource utilization, especially during periods of low activity. Prior research addresses this issue through intelligent scheduling, lightweight virtualization, and container-sharing mechanisms. More recent work aims to improve resource utilization by remodeling the execution of a function within a container to better separate compute and I/O stages. Despite these improvements, existing approaches often introduce delays during execution and induce memory pressure under traffic bursts. In this paper, we present Styx, a novel workflow engine that enhances resource utilization by intelligently decoupling compute and I/O stages. Styx employs a fetch latency predictor that uses real-time system metrics from both the serverless node and the remote storage server to accurately estimate prefetch operations, ensuring input data is available exactly when needed. Furthermore, it offloads the output data upload operation from a container to a host-side data service, thereby efficiently managing provisioned memory. Our approach improves the overall memory allocation by 32.6% when running all the serverless workflows simultaneously when compared to Dataflower + Truffle. Additionally, this method improves the tail latency and the mean latency of a workflow by an average of 26.3% and 21%, respectively.
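The just-in-time prefetching idea above — predict the fetch latency from current system metrics, then start the fetch exactly that far ahead of the compute stage — can be sketched as follows. The linear latency model, the metric names, and the safety margin are invented placeholders, not Styx's actual predictor.

```python
def predict_fetch_latency(size_mb, storage_load, net_bw_mbps):
    """Toy latency model: raw transfer time inflated by a queueing
    penalty proportional to the storage server's load (0.0 = idle)."""
    transfer = size_mb * 8.0 / net_bw_mbps   # seconds for the transfer
    return transfer * (1.0 + storage_load)   # hypothetical penalty term

def prefetch_start_time(compute_start, size_mb, storage_load,
                        net_bw_mbps, margin=0.05):
    """Start the fetch early enough (plus a small safety margin) that
    input data is available exactly when the compute stage begins."""
    lat = predict_fetch_latency(size_mb, storage_load, net_bw_mbps)
    return max(0.0, compute_start - lat - margin)
```

For a 100 MB input over an 800 Mbps link from an idle store, the predicted fetch takes 1.0 s, so a compute stage starting at t = 5.0 s would trigger its prefetch at t = 3.95 s under the 50 ms margin.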
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 982-996.
Citations: 0
mtGEMM: An Efficient GEMM Library for Modern Multi-Core DSPs
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-12 | DOI: 10.1109/TPDS.2026.3664114
Jianbin Fang;Kainan Yu;Peng Zhang;Dezun Dong;Xinxin Qi;Xingyu Hou;Ruibo Wang;Kai Lu
The General Matrix Multiplication (GEMM) is a crucial subprogram in high-performance computing (HPC). With the increasing importance of power and energy consumption, modern Digital Signal Processors (DSPs) are being integrated into general-purpose HPC systems. However, due to architecture disparities, traditional optimizations for CPUs and GPUs are not easily applicable to modern DSPs. This paper shares our experience of optimizing the GEMM operation using a CPU-DSP platform as a case study. Our work employs a set of strategies to improve the performance and scalability of GEMM. These strategies focus on developing micro-kernels based on heterogeneous on-chip memory, addressing the memory access bottleneck in multi-core parallelism, and facilitating efficient transpose-GEMM. These approaches, collectively referred to as an efficient and practical library (a.k.a. mtGEMM), maximize computational capabilities and bandwidth utilization of multi-core DSPs, while achieving high performance for variously-shaped GEMMs. Our experimental results demonstrate that mtGEMM can attain between 92% and 96% of the hardware peak, with the multi-core scalability being almost linear.
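The tiling idea behind GEMM micro-kernels — keep small tiles of A, B, and C resident in fast memory while a dense inner loop updates one C tile — can be shown in plain Python. This is a generic cache-blocking sketch with an arbitrary block size, not mtGEMM's hand-tuned kernels for the DSP's heterogeneous on-chip memory.

```python
def gemm_blocked(A, B, bs=2):
    """C = A @ B computed tile by tile: the three outer loops walk
    bs x bs tiles, and the inner triple loop is the 'micro-kernel'
    that updates one C tile from one A tile and one B tile."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, bs):
        for j0 in range(0, m, bs):
            for p0 in range(0, k, bs):
                # micro-kernel: dense update of C[i0:i0+bs, j0:j0+bs]
                for i in range(i0, min(i0 + bs, n)):
                    for j in range(j0, min(j0 + bs, m)):
                        acc = C[i][j]
                        for p in range(p0, min(p0 + bs, k)):
                            acc += A[i][p] * B[p][j]
                        C[i][j] = acc
    return C
```

In a real kernel the tile sizes are chosen so the working set fits the fastest memory level, and the micro-kernel is vectorized; here the blocking only changes the loop order, so any block size yields the same result as naive GEMM.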
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 905-919.
Citations: 0
HarmonyCache: Scalable In-Network Cache With Read-Write Separation
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-12 | DOI: 10.1109/TPDS.2026.3664186
Jiangyuan Chen;Xiaohua Xu;Wenfei Wu
In key-value storage systems, a small number of hot items account for most traffic. Skewed workloads lead to load imbalance among servers, and servers holding hotspots become system bottlenecks, degrading overall performance. Recent studies show that load imbalance can be eliminated by deploying a small, fast cache node in front of back-end servers, i.e., an in-network cache. Programmable switches enable placing such caches on switches that traffic must pass through. Although existing in-network cache schemes effectively balance loads in large-scale storage systems, they perform poorly under write-intensive workloads and lose scalability as the number of clients grows, due to imbalanced cache nodes. This paper introduces HarmonyCache, a scalable, high-performance in-network cache system that supports write-back. HarmonyCache employs cache replication and read-write separation: only one cache node handles write requests, while the others serve reads only. To achieve scalability and minimize coherence overhead, HarmonyCache proposes an adaptive cache replication scheme to determine where and how many replicas to deploy. In addition, we design heterogeneous in-network caches using different switch resources and propose a hybrid caching scheme. Prototype and extensive experiments show that HarmonyCache significantly improves throughput under various access patterns (read/write-intensive), achieving up to 7.6× throughput gain over state-of-the-art solutions under skewed write-intensive workloads.
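The read-write separation scheme — a single writer replica absorbs all writes and propagates them, while reads are spread across read-only replicas — can be sketched host-side. The class below is an invented illustration of the data flow only; the actual system runs on programmable switches with very different mechanics.

```python
class ReplicatedCache:
    """Toy read-write-separated cache: one writer copy, N read copies."""

    def __init__(self, n_readers=2):
        self.writer = {}                        # authoritative write copy
        self.readers = [{} for _ in range(n_readers)]
        self.rr = 0                             # round-robin read pointer

    def put(self, key, value):
        """All writes hit the single writer replica, which then
        propagates the update to every read-only replica."""
        self.writer[key] = value
        for r in self.readers:
            r[key] = value

    def get(self, key):
        """Reads rotate over the read-only replicas, spreading the
        hot-item read load instead of concentrating it on one node."""
        r = self.readers[self.rr]
        self.rr = (self.rr + 1) % len(self.readers)
        return r.get(key)
```

Because writes touch only one write path, replicas never diverge on conflicting writers; the cost is the propagation fan-out, which is what an adaptive replica count would tune.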
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 920-933.
Citations: 0
ComStar: Compression-Aware Stream Query for Heterogeneous Hybrid Architecture
IF 6.0 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-02-06 | DOI: 10.1109/TPDS.2026.3662253
Yani Liu;Feng Zhang;Yu Zhang;Shuhao Zhang;Bingsheng He;Jianhua Wang;Jidong Zhai;Xiaoyong Du
The exponential increase of stream data in the Big Data era poses critical challenges for SQL queries on compressed streams. These challenges are exacerbated by diverse computational demands and varying application scenarios in stream processing, which lead to increased hardware requirements. Hybrid computing architectures provide a transformative solution in this context by integrating heterogeneous processing units, such as discrete GPUs, CPU-GPU integrated architectures, and edge computing devices, to enhance performance. In this paper, we introduce ComStar, a novel compression-aware stream SQL query system that leverages hybrid computing architectures to execute queries directly on compressed stream data without decompression, greatly improving query performance. ComStar incorporates nine lightweight compression algorithms and features an adaptive compression algorithm selector, which chooses the appropriate algorithm based on data characteristics and network conditions. Additionally, ComStar implements hierarchical multi-tier execution to select the optimal architecture and specific devices for compressed stream SQL queries, enabling fine-grained and efficient execution across the hybrid architecture. Our experiments demonstrate that ComStar achieves an average throughput improvement of 75.6% under 100 Mbps network conditions, leveraging its unique compression-aware query capabilities to outperform contemporary solutions. At a higher network speed of 1 Gbps, ComStar improves throughput by an average of 47.4%. Additionally, ComStar achieves a 28.6% improvement in the throughput/price ratio compared to traditional methods, and a 71.4% enhancement in the throughput/power ratio. Furthermore, ComStar’s adaptive compression algorithm selector achieves 95.6% accuracy. These results underscore the effectiveness of our system in addressing the challenges posed by the increasing volume of stream data.
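Querying compressed data without decompression can be illustrated with run-length encoding: a filter predicate is evaluated once per run rather than once per row. RLE and the predicate below are illustrative stand-ins; ComStar's nine codecs and its adaptive selector are not reproduced here.

```python
def rle_encode(values):
    """Run-length encode a stream column into [value, count] runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([v, 1])      # start a new run
    return runs

def rle_count_where(runs, pred):
    """SELECT COUNT(*) WHERE pred(v), evaluated on the compressed form:
    the predicate is tested once per run, never per original row."""
    return sum(n for v, n in runs if pred(v))
```

On a skewed column with long runs, the predicate fires a handful of times instead of once per tuple, which is where decompression-free execution wins.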
"ComStar: Compression-Aware Stream Query for Heterogeneous Hybrid Architecture" · Yani Liu; Feng Zhang; Yu Zhang; Shuhao Zhang; Bingsheng He; Jianhua Wang; Jidong Zhai; Xiaoyong Du · IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 948–965 · Pub Date: 2026-02-06 · DOI: 10.1109/TPDS.2026.3662253
Citations: 0
Accelerating Molecular Dynamics Simulations on ARM Multi-Core Processors
IF 6.0 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2026-02-03 · DOI: 10.1109/TPDS.2026.3660861
Ran Chen;Huihai An;Zhihua Sa;Ping Gao;Xiaohui Duan;Bertil Schmidt;Yang Yang;Yizhen Chen;Lin Gan;Guangwen Yang;Weiguo Liu
LAMMPS is a widely used molecular dynamics (MD) software package in materials science, computational chemistry, and biophysics, supporting parallel computing from a single CPU core to large supercomputers. The Kunpeng processor features both high memory bandwidth and high core density and is therefore an interesting candidate for accelerating compute-intensive workloads. In this article, we target the Kunpeng multi-core architecture and focus on optimizing LAMMPS for modern ARM-based platforms, using the Lennard-Jones (L-J) and Tersoff potentials as representative case studies. We investigate both common and specific optimization challenges, and present a comprehensive performance analysis addressing four key aspects: neighbor list algorithm design, force computation optimization, efficient vectorization, and multi-thread parallelization. Experimental results show that the optimized potentials achieve speedups of approximately 2× and 5×, reaching 4.55× and 7.04× the performance of the original Intel version for L-J and Tersoff, respectively. Both potentials outperform Intel's acceleration library, with a peak performance gain of up to 2.9×–3.5×. In terms of parallel efficiency, we evaluate scalability both within a single CPU (small-scale) and across multiple nodes (large-scale). Strong and weak scaling tests within a single CPU show that at an expansion factor of 32, parallel efficiency remains above 90%. Large-scale weak scaling across multiple nodes achieves up to 86% efficiency at an expansion factor of 32. Using 32 nodes (18,432 processes), our implementation enables billion-atom simulations with L-J and Tersoff potentials. This work achieves breakthrough performance and provides critical support for large-scale molecular dynamics in engineering applications.
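For context, the Lennard-Jones pair interaction used as one of the case studies has the form U(r) = 4ε((σ/r)¹² − (σ/r)⁶), evaluated over a cutoff neighbor list. The following is a minimal, unoptimized Python sketch of that structure, using the conventional ε, σ, and cutoff parameter names; it illustrates the physics only and is not LAMMPS code.

```python
# Minimal sketch of a Lennard-Jones pair-energy loop over a cutoff
# neighbor list. O(N^2) list construction; real MD codes use cell
# lists or Verlet lists, which is exactly what the paper optimizes.

def build_neighbor_list(coords, rc):
    """All pairs (i, j), i < j, with squared distance below rc^2."""
    pairs = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if r2 < rc * rc:
                pairs.append((i, j, r2))
    return pairs

def lj_energy(pairs, epsilon=1.0, sigma=1.0):
    """U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6), summed over pairs."""
    e = 0.0
    for _, _, r2 in pairs:
        sr6 = (sigma * sigma / r2) ** 3  # (sigma/r)^6 from r^2 directly
        e += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return e

# Two atoms at the L-J minimum separation r = 2^(1/6) * sigma.
coords = [(0.0, 0.0, 0.0), (1.122462048, 0.0, 0.0)]
pairs = build_neighbor_list(coords, rc=2.5)
print(round(lj_energy(pairs), 4))  # -1.0, the well depth at the minimum
```

Note that the energy is computed from r² directly, avoiding a square root per pair; this standard trick is one reason the force loop vectorizes well.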
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 805–821 · DOI: 10.1109/TPDS.2026.3660861
Citations: 0
FLARE: Efficient Distributed Large-Scale Graph Neural Networks Training With Adaptive Latency-Aware Probabilistic Caching
IF 6.0 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2026-02-03 · DOI: 10.1109/TPDS.2026.3660379
Muhammad Numan Khan;Young-Koo Lee
Since the emergence of Graph Neural Networks (GNNs), researchers have extensively investigated large-scale GNN training because of GNNs' success and wide usage in various domains, including biological networks, finance, and recommendation systems. This work focuses on training large-scale distributed GNNs, where partitioning massive graphs across multiple machines creates remote communication overhead that becomes a major scalability bottleneck. We introduce a policy-driven caching mechanism that prioritizes node features and embeddings based on access frequency and cross-partition fetch cost, significantly reducing communication overhead without sacrificing accuracy. Our policies are based on analysis of Node Affinities (NAFs) during multi-hop neighborhood sampling, where sampled neighborhoods extend substantially beyond the graph partition boundaries. Analyzing NAFs not only alleviates the communication bottleneck but also provides a systematic mechanism to manage in-memory data effectively, prioritizing GPU storage for node features with high fetch latency. We present FLARE, a system designed to handle partitioned feature data while leveraging the NAF-based caching policy. FLARE substantially reduces both communication overhead and training convergence time.
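The caching idea described above can be reduced to a scoring policy: a remote node's priority is its access frequency weighted by its cross-partition fetch latency, and the highest-scoring nodes are pinned in GPU memory. A minimal sketch of such a policy follows; the frequency × latency score and all names are assumptions for illustration, not FLARE's actual code.

```python
# Illustrative latency-aware cache-selection policy (assumed scoring,
# not FLARE's implementation): prioritize nodes whose features are
# fetched often AND are expensive to fetch across partitions.

def cache_priority(access_freq, fetch_latency_ms):
    """Expected time saved per epoch if this node's features are cached."""
    return access_freq * fetch_latency_ms

def select_cache_set(nodes, capacity):
    """nodes: dict node_id -> (access_freq, fetch_latency_ms).
    Returns the node ids worth caching under the capacity budget."""
    ranked = sorted(nodes,
                    key=lambda n: cache_priority(*nodes[n]),
                    reverse=True)
    return set(ranked[:capacity])

nodes = {
    "a": (100, 0.1),  # hot but cheap to fetch: score 10
    "b": (40, 2.0),   # cross-partition:        score 80
    "c": (5, 5.0),    # cold remote:            score 25
    "d": (60, 1.5),   # hot and remote:         score 90
}
print(sorted(select_cache_set(nodes, capacity=2)))  # ['b', 'd']
```

The point of weighting by latency, rather than frequency alone, is visible in node "a": despite being the most frequently accessed, it is cheap to fetch and so loses its cache slot to costlier remote nodes.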
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 849–866 · DOI: 10.1109/TPDS.2026.3660379
Citations: 0
A 590-Nanosecond 757-Gbps FPGA Lossy Compressed Network
IF 6.0 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2026-02-02 · DOI: 10.1109/TPDS.2026.3659817
Michihiro Koibuchi;Takumi Honda;Naoto Fukumoto;Shoichi Hirasawa;Koji Nakano
Inter-FPGA communication bandwidth has become a limiting factor in scaling memory-intensive workloads on FPGA-based systems. While modern FPGAs integrate high-bandwidth memory (HBM) to increase local memory throughput, network interfaces often lag behind, creating an imbalance between computation and communication resources. Data compression is a technique to increase effective communication bandwidth by reducing the amount of data transferred, but existing solutions struggle to meet the performance and operation latency requirements of FPGA-based platforms. This paper presents a high-throughput lossy compression framework that enables sub-microsecond latency communication in FPGA clusters. The proposed design addresses the challenge of aligning variable-length compressed data with fixed-width network channels by using transpose circuits, memory-bank reordering, and word-wise operations. A run-length encoding scheme with bounded error is employed to compress floating-point and fixed-point data without relying on complex fine-grained bit-level manipulations, enabling low-latency and scalable implementation. The proposed architecture is implemented on a custom Stratix 10 MX2100 FPGA card equipped with eight 50 Gbps network ports and silicon photonics transceivers. The system achieves up to 757 Gbps of aggregate bandwidth per FPGA in collective communication operations. Compression and decompression are performed within 590 ns total latency, while maintaining the quality of results in a GradAllReduce workload for deep learning.
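The bounded-error run-length scheme described above can be understood as two steps: quantize each value onto a grid of width 2·err, so the reconstruction error never exceeds err, then run-length encode consecutive equal codes. A minimal Python illustration of that idea follows; the function names and quantization details are assumptions in the spirit of the abstract, not the paper's hardware design.

```python
# Sketch of error-bounded run-length encoding for floats (assumed
# scheme, not the paper's circuit): quantize to a grid of width
# 2*err, so |original - decoded| <= err, then RLE the codes.

def rle_lossy_encode(values, err):
    """Quantize each value, then collapse equal consecutive codes."""
    codes = [round(v / (2 * err)) for v in values]
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return runs

def rle_lossy_decode(runs, err):
    """Expand (code, count) runs back to approximate values."""
    out = []
    for c, n in runs:
        out.extend([c * 2 * err] * n)
    return out

data = [1.00, 1.01, 0.99, 3.50, 3.49]
runs = rle_lossy_encode(data, err=0.05)      # 5 values -> 2 runs
decoded = rle_lossy_decode(runs, err=0.05)
assert all(abs(a - b) <= 0.05 for a, b in zip(data, decoded))
```

Because nearby values collapse to the same code, noisy-but-smooth data (such as gradients in the GradAllReduce workload mentioned above) produces long runs, and the per-word operations avoid the fine-grained bit packing that would hurt latency in hardware.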
IEEE Transactions on Parallel and Distributed Systems, vol. 37, no. 4, pp. 836–848 · DOI: 10.1109/TPDS.2026.3659817 · Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11370288
Citations: 0