
IEEE Transactions on Parallel and Distributed Systems: Latest Publications

Spread+: Scalable Model Aggregation in Federated Learning With Non-IID Data
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-18 DOI: 10.1109/TPDS.2025.3539738
Huanghuang Liang;Xin Yang;Xiaoming Han;Boan Liu;Chuang Hu;Dan Wang;Xiaobo Zhou;Dazhao Cheng
Federated learning (FL) addresses privacy concerns by training models without sharing raw data, overcoming the limitations of traditional machine learning paradigms. However, the rise of smart applications has accentuated the heterogeneity in data and devices, which presents significant challenges for FL. In particular, data skewness among participants can compromise model accuracy, while diverse device capabilities lead to aggregation bottlenecks, causing severe model congestion. In this article, we introduce Spread+, a hierarchical system that enhances FL by organizing clients into clusters and delegating model aggregation to edge devices, thus mitigating these challenges. Spread+ leverages a hedonic coalition formation game to optimize client organization and adaptive algorithms to regulate aggregation intervals within and across clusters. Moreover, it refines the aggregation algorithm to boost model accuracy. Our experiments demonstrate that Spread+ significantly alleviates the central aggregation bottleneck and surpasses mainstream benchmarks, achieving performance improvements of 49.58% over FAVG and 22.78% over Ring-allreduce.
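As a concrete picture of the two-level aggregation described above, here is a minimal Python sketch in which edge aggregators first average the models inside each cluster and the cloud then averages the cluster results. The sample-count weighting and the cluster assignments are illustrative assumptions; they are not Spread+'s coalition-formation or interval-adaptation logic.

```python
import numpy as np

def weighted_average(models, weights):
    """Average flat parameter vectors, weighted by sample counts."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def hierarchical_aggregate(cluster_models, cluster_sizes):
    """Intra-cluster aggregation at the edge, then inter-cluster aggregation in the cloud."""
    edge_models = [weighted_average(m, s) for m, s in zip(cluster_models, cluster_sizes)]
    edge_sizes = [sum(s) for s in cluster_sizes]
    return weighted_average(edge_models, edge_sizes)

# Two clusters of clients holding 3-parameter local models (hypothetical values).
cluster_a = [np.array([1.0, 0.0, 2.0]), np.array([3.0, 1.0, 0.0])]
cluster_b = [np.array([0.0, 2.0, 1.0])]
print(hierarchical_aggregate([cluster_a, cluster_b], [[100, 50], [200]]))
```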
{"title":"Spread+: Scalable Model Aggregation in Federated Learning With Non-IID Data","authors":"Huanghuang Liang;Xin Yang;Xiaoming Han;Boan Liu;Chuang Hu;Dan Wang;Xiaobo Zhou;Dazhao Cheng","doi":"10.1109/TPDS.2025.3539738","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3539738","url":null,"abstract":"Federated learning (FL) addresses privacy concerns by training models without sharing raw data, overcoming the limitations of traditional machine learning paradigms. However, the rise of smart applications has accentuated the heterogeneity in data and devices, which presents significant challenges for FL. In particular, data skewness among participants can compromise model accuracy, while diverse device capabilities lead to aggregation bottlenecks, causing severe model congestion. In this article, we introduce Spread+, a hierarchical system that enhances FL by organizing clients into clusters and delegating model aggregation to edge devices, thus mitigating these challenges. Spread+ leverages hedonic coalition formation game to optimize customer organization and adaptive algorithms to regulate aggregation intervals within and across clusters. Moreover, it refines the aggregation algorithm to boost model accuracy. Our experiments demonstrate that Spread+ significantly alleviates the central aggregation bottleneck and surpasses mainstream benchmarks, achieving performance improvements of 49.58% over FAVG and 22.78% over Ring-allreduce.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"701-716"},"PeriodicalIF":5.6,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FedTune-SGM: A Stackelberg-Driven Personalized Federated Learning Strategy for Edge Networks
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-18 DOI: 10.1109/TPDS.2025.3543368
Neha Singh;Mainak Adhikari
Federated Learning (FL) has emerged as a prominent solution for distributed learning environments, enabling collaborative model training without centralized data collection. However, FL faces significant challenges such as data heterogeneity and resource-constrained edge devices for model training and analysis, leading to accuracy degradation and bias in model performance. To address these critical issues, we propose a novel FL strategy named FedTune-SGM, designed to optimize model training in decentralized settings. In this strategy, a cloud-based model is initially trained and fine-tuned on the edge devices with additional layers tailored to the specific data characteristics. This fine-tuning process effectively mitigates the impact of data heterogeneity, enhancing the robustness and generalization capability of the model. FedTune-SGM employs a strategic weighting mechanism that ensures a balanced and equitable contribution from participating edge devices to prevent dominant influences from resource-rich devices and promote a fairer and more accurate aggregated model. Additionally, the proposed strategy integrates a Stackelberg Game model to foster an interactive and dynamic cloud-edge setup that motivates edge devices to invest more effort in model training and ensures the effectiveness of resource-constrained edge devices. Extensive experiments conducted on three diverse datasets highlight the superior performance of the proposed FedTune-SGM strategy compared to state-of-the-art FL techniques in terms of accuracy and robustness while meeting the critical challenges of data heterogeneity and resource limitations in FL environments. Through these innovations, FedTune-SGM paves the way for more reliable and efficient distributed learning systems, unlocking the full potential of FL in practical applications.
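The balanced-contribution idea can be pictured with a small sketch: raw per-device weights are capped before normalization so that no single resource-rich device dominates the aggregate. The clipping rule and cap value below are assumptions chosen for illustration, not FedTune-SGM's Stackelberg-derived weighting.

```python
import numpy as np

def balanced_weights(contributions, cap=0.3):
    """Proportional weights, clipped at `cap` and renormalized (illustrative rule)."""
    w = np.asarray(contributions, dtype=float)
    w = w / w.sum()
    w = np.minimum(w, cap)      # cap any dominant device
    return w / w.sum()

def aggregate(models, contributions):
    w = balanced_weights(contributions)
    return sum(wi * m for wi, m in zip(w, models))

models = [np.array([1.0, 2.0]), np.array([0.0, 1.0]), np.array([4.0, 4.0])]
# The 900-sample device's weight drops from 0.90 to 0.75 after capping.
print(aggregate(models, contributions=[900, 50, 50]))
```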
{"title":"FedTune-SGM: A Stackelberg-Driven Personalized Federated Learning Strategy for Edge Networks","authors":"Neha Singh;Mainak Adhikari","doi":"10.1109/TPDS.2025.3543368","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3543368","url":null,"abstract":"Federated Learning (FL) has emerged as a prominent solution for distributed learning environments, enabling collaborative model training without centralized data collection. However, FL faces significant challenges such as data heterogeneity and resource-constraint edge devices for model training and analysis, leading to accuracy degradation and bias in model performance. To address these critical issues, we propose a novel FL strategy named FedTune-SGM, designed to optimize model training in decentralized settings. In this strategy, a cloud-based model is initially trained and fine-tuned on the edge devices with additional layers tailored to the specific data characteristics. This fine-tuning process effectively mitigates the impact of data heterogeneity, enhancing the robustness and generalization capability of the model. FedTune-SGM employs a strategic weighting mechanism that ensures a balanced and equitable contribution from participating edge devices to prevent dominant influences from resource-rich devices and promote a fairer and more accurate aggregated model. Additionally, the proposed strategy integrates a Stackelberg Game model to foster an interactive and dynamic cloud-edge setup that motivates edge devices to invest more effort in model training and ensures the effectiveness of resource-constraint edge devices. Extensive experiments conducted on three diverse datasets highlight the superior performance of the proposed FedTune-SGM strategy compared to state-of-the-art FL techniques in terms of accuracy and robustness while meeting the critical challenges of data heterogeneity and resource limitations in FL environments. Through these innovations, FedTune-SGM paves the way for more reliable and efficient distributed learning systems, unlocking the full potential of FL in practical applications.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"791-802"},"PeriodicalIF":5.6,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Tail Latency SLO Guaranteed Task Scheduling Scheme for User-Facing Services
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-14 DOI: 10.1109/TPDS.2025.3542638
Zhijun Wang;Huiyang Li;Lin Sun;Stoddard Rosenkrantz;Hao Che;Hong Jiang
A primary design objective for user-facing services for cloud and edge computing is to maximize query throughput, while meeting query tail latency Service Level Objectives (SLOs) for individual queries. Unfortunately, the existing solutions fall short of achieving this design objective, which, we argue, is largely attributed to the fact that they fail to take the query fanout explicitly into account. In this paper, we propose TailGuard based on a Tail-latency-SLO-and-Fanout-aware Earliest-Deadline-First Queuing policy (TF-EDFQ) for task queuing at the individual task servers to which the query tasks are fanned out. With the task pre-dequeuing time deadline for each task being derived from both the query tail latency SLO and the query fanout, TailGuard takes an important first step towards achieving the design objective. A query admission control scheme is also developed to provide a tail latency SLO guarantee in the presence of resource shortages. TailGuard is evaluated against First-In-First-Out (FIFO) task queuing, task PRIority Queuing (PRIQ) and Tail-latency-SLO-aware EDFQ (T-EDFQ) policies by both simulation and testing in the Amazon EC2 cloud. It is driven by three types of applications in the Tailbench benchmark suite, featuring web search, in-memory key-value store, and transactional database applications. The results demonstrate that TailGuard can significantly improve resource utilization (e.g., up to 80% compared to FIFO), while also meeting the targeted tail latency SLOs, as compared with the other three policies. TailGuard is also implemented and tested in a highly heterogeneous Sensing-as-a-Service (SaS) testbed for a data sensing service, demonstrating performance gains of up to 33%. These results are consistent with both the simulation and Amazon EC2 results.
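A rough sketch of the queuing idea: each task enters a per-server priority queue ordered by a deadline derived from its query's tail latency SLO and fanout, so tasks belonging to high-fanout queries are served earlier. The specific deadline formula below (SLO budget shrunk by log2 of the fanout) is a stand-in assumption, not TailGuard's derivation of the pre-dequeuing deadline.

```python
import heapq, math, time

class EDFQueue:
    """Earliest-deadline-first task queue; deadlines encode SLO and fanout."""
    def __init__(self):
        self._heap, self._seq = [], 0   # seq breaks ties between equal deadlines

    def push(self, task, arrival, slo_ms, fanout):
        budget_ms = slo_ms / (1.0 + math.log2(max(fanout, 1)))  # tighter budget for larger fanout
        heapq.heappush(self._heap, (arrival + budget_ms / 1000.0, self._seq, task))
        self._seq += 1

    def pop(self):
        deadline, _, task = heapq.heappop(self._heap)
        return task, deadline

q, now = EDFQueue(), time.time()
q.push("query-A/task-3", now, slo_ms=100, fanout=64)  # high fanout -> earlier deadline
q.push("query-B/task-1", now, slo_ms=100, fanout=2)
print(q.pop()[0])  # query-A/task-3 is dequeued first
```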
{"title":"A Tail Latency SLO Guaranteed Task Scheduling Scheme for User-Facing Services","authors":"Zhijun Wang;Huiyang Li;Lin Sun;Stoddard Rosenkrantz;Hao Che;Hong Jiang","doi":"10.1109/TPDS.2025.3542638","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3542638","url":null,"abstract":"A primary design objective for user-facing services for cloud and edge computing is to maximize query throughput, while meeting query tail latency Service Level Objectives (SLOs) for individual queries. Unfortunately, the existing solutions fall short of achieving this design objective, which we argue, is largely attributed to the fact that they fail to take the query fanout explicitly into account. In this paper, we propose TailGuard based on a Tail-latency-SLO-and-Fanout-aware Earliest-Deadline-First Queuing policy (TF-EDFQ) for task queuing at individual task servers the query tasks are fanned out to. With the task pre-dequeuing time deadline for each task being derived based on both query tail latency SLO and query fanout, TailGuard takes an important first step towards achieving the design objective. A query admission control scheme is also developed to provide tail latency SLO guarantee in the presence of resource shortages. TailGuard is evaluated against First-In-First-Out (FIFO) task queuing, task PRIority Queuing (PRIQ) and Tail-latency-SLO-aware EDFQ (T-EDFQ) policies by both simulation and testing in the Amazon EC2 cloud. It is driven by three types of applications in the Tailbench benchmark suite, featuring web search, in-memory key-value store, and transactional database applications. The results demonstrate that TailGuard can significantly improve resource utilization (e.g., up to 80% compared to FIFO), while also meeting the targeted tail latency SLOs, as compared with the other three policies. TailGuard is also implemented and tested in a highly heterogeneous Sensing-<inline-formula><tex-math>$a$</tex-math></inline-formula>s-a-Service (SaS) testbed for a data sensing service, demonstrating performance gains of up to 33% . These results are consistent with both the simulation and Amazon EC2 results.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"759-774"},"PeriodicalIF":5.6,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beelog: Online Log Compaction for Dependable Systems
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-13 DOI: 10.1109/TPDS.2025.3541628
Luiz Gustavo C. Xavier;Cristina Meinhardt;Odorico Machado Mendizabal
Logs are a known abstraction used to develop dependable and secure distributed systems. By logging entries on a sequential global log, systems can synchronize updates over replicas and provide a consistent state recovery in the presence of faults. However, their usage incurs a non-negligible overhead on the application's performance. This article presents Beelog, an approach to reduce logging impact and accelerate recovery on log-based protocols by safely discarding entries from logs. The technique involves executing a log compaction during run-time concurrently with the persistence and execution of commands. Besides compacting logging information, the proposed technique splits the log file and incorporates strategies to reduce logging overhead, such as batching and parallel I/O. We evaluate the proposed approach by implementing it as a new feature of the etcd key-value store and comparing it against etcd's standard logging. Utilizing workloads from the YCSB benchmark and experimenting with different configurations for batch size and number of storage devices, our results indicate that Beelog can reduce application recovery time, especially in write-intensive workloads with a small number of keys and a probability favoring the most recent keys to be updated. In such scenarios, we observed up to a 50% compaction in the log file size and a 65% improvement in recovery time compared to etcd's standard recovery protocol. As a side effect, batching results in higher command execution latency, ranging from 100 ms to 350 ms with Beelog, compared to the default etcd's 90 ms. Except for the latency increase, the proposed technique does not impose other significant performance costs, making it a practical solution for systems where fast recovery and reduced storage are priorities.
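The core compaction step can be sketched in a few lines: replaying only the last durable write of each key yields the same state as replaying the full log, so earlier writes can be safely discarded. File splitting, batching, and parallel I/O from Beelog are omitted; this is only the discard rule.

```python
def compact(log_entries):
    """log_entries: ordered (key, value) writes; return a log that replays to the same state."""
    last_write = {}
    for i, (key, _) in enumerate(log_entries):
        last_write[key] = i                       # remember the final write per key
    return [entry for i, entry in enumerate(log_entries) if last_write[entry[0]] == i]

log = [("x", 1), ("y", 7), ("x", 2), ("x", 3), ("z", 9)]
print(compact(log))  # [('y', 7), ('x', 3), ('z', 9)]: same replayed state, shorter log
```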
{"title":"Beelog: Online Log Compaction for Dependable Systems","authors":"Luiz Gustavo C. Xavier;Cristina Meinhardt;Odorico Machado Mendizabal","doi":"10.1109/TPDS.2025.3541628","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3541628","url":null,"abstract":"Logs are a known abstraction used to develop dependable and secure distributed systems. By logging entries on a sequential global log, systems can synchronize updates over replicas and provide a consistent state recovery in the presence of faults. However, their usage incurs a non-negligible overhead on the application's performance. This article presents Beelog, an approach to reduce logging impact and accelerate recovery on log-based protocols by safely discarding entries from logs. The technique involves executing a log compaction during run-time concurrently with the persistence and execution of commands. Besides compacting logging information, the proposed technique splits the log file and incorporates strategies to reduce logging overhead, such as batching and parallel I/O. We evaluate the proposed approach by implementing it as a new feature of the etcd key-value store and comparing it against etcd's standard logging. Utilizing workloads from the YCSB benchmark and experimenting with different configurations for batch size and number of storage devices, our results indicate that Beelog can reduce application recovery time, especially in write-intensive workloads with a small number of keys and a probability favoring the most recent keys to be updated. In such scenarios, we observed up to a 50% compaction in the log file size and a 65% improvement in recovery time compared to etcd's standard recovery protocol. As a side effect, batching results in higher command execution latency, ranging from <inline-formula><tex-math>$ text{100 ms}$</tex-math></inline-formula> to <inline-formula><tex-math>$ text{350 ms}$</tex-math></inline-formula> with Beelog, compared to the default etcd's <inline-formula><tex-math>$ text{90 ms}$</tex-math></inline-formula>. Except for the latency increase, the proposed technique does not impose other significant performance costs, making it a practical solution for systems where fast recovery and reduced storage are priorities.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"689-700"},"PeriodicalIF":5.6,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Task-Aware Service Placement for Distributed Learning in Wireless Edge Networks
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-10 DOI: 10.1109/TPDS.2025.3539620
Rong Cong;Zhiwei Zhao;Mengfan Wang;Geyong Min;Jiangshu Liu;Jiwei Mo
Machine learning has been a driving force in the evolution of tremendous computing services and applications in the past decade. Traditional learning systems rely on centralized training and inference, which poses serious privacy and security concerns. To solve this problem, distributed learning over wireless edge networks (DLWENs) emerges as a trending solution and has attracted increasing research interest. In DLWENs, corresponding services need to be placed onto the edge servers to process the distributed tasks. Clearly, different placements of training services can significantly affect the performance of all distributed learning tasks. In this article, we propose TASP, a task-aware service placement scheme for distributed learning in wireless edge networks. By carefully considering the structures (directed acyclic graphs) of the distributed learning tasks, the fine-grained task requests and inter-task dependencies are incorporated into the placement strategies to realize the parallel computation of learning services. We also exploit queuing theory to characterize the dynamics caused by task uncertainties. Extensive experiments based on the Alibaba ML dataset show that, compared to the state-of-the-art schemes, the proposed work reduces the overall delay of distributed learning tasks by 38.6% on average.
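A toy sketch of DAG-aware placement in this setting: tasks are placed in topological order, and each task goes to the edge server with the lowest estimated M/M/1 queueing delay. Both the greedy rule and the queueing model are simplifying assumptions used only to illustrate the problem, not TASP's optimization.

```python
from graphlib import TopologicalSorter

def place(dag, task_rate, servers):
    """dag: {task: set of predecessors}; task_rate: {task: arrival rate};
    servers: {server: service rate mu}. Returns {task: chosen server}."""
    load = {s: 0.0 for s in servers}              # arrival rate already assigned per server
    placement = {}
    for task in TopologicalSorter(dag).static_order():
        def est_delay(server):
            residual = servers[server] - (load[server] + task_rate[task])
            return 1.0 / residual if residual > 0 else float("inf")
        best = min(servers, key=est_delay)        # lowest estimated M/M/1 delay
        placement[task] = best
        load[best] += task_rate[task]
    return placement

dag = {"preprocess": set(), "train": {"preprocess"}, "aggregate": {"train"}}
rates = {"preprocess": 2.0, "train": 4.0, "aggregate": 1.0}
print(place(dag, rates, servers={"edge-1": 6.0, "edge-2": 8.0}))
```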
{"title":"Task-Aware Service Placement for Distributed Learning in Wireless Edge Networks","authors":"Rong Cong;Zhiwei Zhao;Mengfan Wang;Geyong Min;Jiangshu Liu;Jiwei Mo","doi":"10.1109/TPDS.2025.3539620","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3539620","url":null,"abstract":"Machine learning has been a driving force in the evolution of tremendous computing services and applications in the past decade. Traditional learning systems rely on centralized training and inference, which poses serious privacy and security concerns. To solve this problem, distributed learning over wireless edge networks (DLWENs) emerges as a trending solution and has attracted increasing research interests. In DLWENs, corresponding services need to be placed onto the edge servers to process the distributed tasks. Apparently, different placement of training services can significantly affect the performance of all distributed learning tasks. In this article, we propose TASP, a task-aware service placement scheme for distributed learning in wireless edge networks. By carefully considering the structures (directed acyclic graphs) of the distributed learning tasks, the fine-grained task requests and inter-task dependencies are incorporated into the placement strategies to realize the parallel computation of learning services. We also exploit queuing theory to characterize the dynamics caused by task uncertainties. Extensive experiments based on the Alibaba ML dataset show that, compared to the state-of-the-art schemes, the proposed work reduces the overall delay of distributed learning tasks by 38.6% on average.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"731-744"},"PeriodicalIF":5.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EfficientMoE: Optimizing Mixture-of-Experts Model Training With Adaptive Load Balance
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-06 DOI: 10.1109/TPDS.2025.3539297
Yan Zeng;Chengchuang Huang;Yipeng Mei;Lifu Zhang;Teng Su;Wei Ye;Wenqi Shi;Shengnan Wang
Mixture-of-Experts (MoE) efficiently trains large models by using sparse activation to lower costs, selecting a few experts based on data characteristics. However, it faces challenges such as All-to-All communication overhead and load imbalance, with most optimizations targeting dynamic graphs rather than the more efficient static graphs. This study identifies two key challenges in training MoE on static graphs: 1) excessive All-to-All communication (up to 75% of iteration time) and load imbalance (70% of tokens handled by two experts) between experts due to the sparse structure of the MoE model and the token distribution; and 2) inefficient zero-padding for static shapes, leading to unnecessary computational overhead (wasting approximately 50% of resources). Thus, EfficientMoE, a scheduling method based on expert load and data characteristics, is introduced. EfficientMoE first designs a sampler to collect real-time information about token distribution, expert load, etc. It constructs a load prediction model to evaluate expert load. Subsequently, EfficientMoE proposes a dynamic scheduling strategy for experts with evaluated expert load, reducing All-to-All communication and addressing load-balancing issues. Additionally, an expert capacity model is proposed to set different capacities for replicas of hot experts before static graph compilation, minimizing computation and storage overhead caused by significant padding. This study implements EfficientMoE in MindSpore and uses 32 Ascend AI accelerators to train an MoE model with 21 billion parameters and evaluate its validity. EfficientMoE demonstrated an improvement of 30% in model training time, approximately 12% reduction in communication time, and saved 35% of computational resources across different clusters, compared with Switch Transformers and the FasterMoE method for static graphs.
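The replication-and-capacity idea can be pictured with a small sketch: predicted per-expert token loads determine how many replicas each hot expert gets and how large each replica's capacity is, instead of one uniform padded capacity. The proportional rules below are illustrative assumptions, not EfficientMoE's load-prediction or capacity models.

```python
import math

def plan_replicas(predicted_load, total_replicas):
    """predicted_load: {expert: expected #tokens}; replicas roughly proportional to load."""
    total = sum(predicted_load.values())
    return {e: max(1, round(total_replicas * load / total))
            for e, load in predicted_load.items()}

def replica_capacity(predicted_load, plan, slack=1.1):
    """Per-replica capacity = predicted share per replica plus a small slack factor."""
    return {e: math.ceil(slack * predicted_load[e] / plan[e]) for e in plan}

load = {"expert-0": 7000, "expert-1": 2000, "expert-2": 500, "expert-3": 500}
plan = plan_replicas(load, total_replicas=8)
print(plan)                          # hot experts receive more replicas
print(replica_capacity(load, plan))  # and each replica gets a load-sized capacity
```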
{"title":"EfficientMoE: Optimizing Mixture-of-Experts Model Training With Adaptive Load Balance","authors":"Yan Zeng;Chengchuang Huang;Yipeng Mei;Lifu Zhang;Teng Su;Wei Ye;Wenqi Shi;Shengnan Wang","doi":"10.1109/TPDS.2025.3539297","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3539297","url":null,"abstract":"Mixture-of-Experts (MoE) efficiently trains large models by using sparse activation to lower costs, selecting a few experts based on data characteristics. However, it faces challenges such as All-to-All communication overhead and load imbalance, with most optimizations targeting dynamic graphs rather than the more efficient static graphs. This study identifies two key challenges in training MoE on static graphs: 1) excessive All-to-All communication (up to 75% of iteration time) and load imbalance (70% of tokens handled by two experts) between experts due to the sparse structure of the MoE model and the token distribution; and 2) inefficient zero-padding for static shapes, leading to unnecessary computational overhead(wasting approximately 50% of resources). Thus, EfficientMoE, a scheduling method based on expert load and data characteristics, is introduced. EfficientMoE first designs a sampler to collect real-time information about token distribution, expert load, etc. It constructs a load prediction model to evaluate expert load. Subsequently, EfficientMoE proposes a dynamic schedule strategy for experts with evaluated expert load, reducing All-to-All communication and addressing load-balancing issues. Additionally, an expert capacity model is proposed to set different capacities for replicas of hot experts before static graph compilation, minimizing computation and storage overhead caused by significant padding. This study implements EfficientMoE in MindSpore and uses 32 Ascend AI accelerators to train an MoE model with 21 billion parameters and evaluate its validity. EfficientMoE demonstrated an improvement of 30% in model training time, approximately 12% reduction in communication time, and saved 35% computational resources across different clusters, compared with Switch transformers, and the Fastermoe method for static graphs.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"677-688"},"PeriodicalIF":5.6,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flips: A Flexible Partitioning Strategy Near Memory Processing Architecture for Recommendation System
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-02-06 DOI: 10.1109/TPDS.2025.3539534
Yudi Qiu;Lingfei Lu;Shiyan Yi;Minge Jing;Xiaoyang Zeng;Yang Kong;Yibo Fan
Personalized recommendation systems are massively deployed in production data centers. The memory-intensive embedding layers of recommendation systems are the crucial performance bottleneck, with operations manifesting as sparse memory lookups and simple reduction computations. Recent studies propose near-memory processing (NMP) architectures to speed up embedding operations by utilizing high internal memory bandwidth. However, these solutions typically employ a fixed vector partitioning strategy that fails to adapt to changes in data center deployment scenarios and lacks practicality. We propose Flips, a flexible partitioning strategy NMP architecture that accelerates embedding layers. Flips supports more than ten partitioning strategies through hardware-software co-design. Novel hardware architectures and address mapping schemes are designed for the memory-side and host-side. We provide two approaches to determine the optimal partitioning strategy for each embedding table, enabling the architecture to accommodate changes in deployment scenarios. Importantly, Flips is decoupled from the NMP level and can utilize rank-level, bank-group-level and bank-level parallelism. In peer-level NMP evaluations, Flips outperforms state-of-the-art NMP solutions, RecNMP, TRiM, and ReCross by up to 4.0×, 4.1×, and 3.5×, respectively.
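Two of the partitioning choices such an architecture can switch between can be emulated in a few lines of NumPy: splitting every embedding vector column-wise across NMP units versus sharding table rows across units, both producing the same pooled result with different data movement. This is only a software illustration of the layout trade-off, not Flips' hardware address mapping.

```python
import numpy as np

def columnwise_lookup_reduce(table, indices, num_units):
    """Each unit holds a slice of every vector and pools it locally; the host concatenates."""
    slices = np.array_split(table, num_units, axis=1)
    return np.concatenate([s[indices].sum(axis=0) for s in slices])

def rowwise_lookup_reduce(table, indices, num_units):
    """Rows are sharded across units; each unit pools the indices it owns; the host adds partials."""
    out = np.zeros(table.shape[1])
    for rows in np.array_split(np.arange(table.shape[0]), num_units):
        owned = [i for i in indices if i in set(rows.tolist())]
        if owned:
            out += table[owned].sum(axis=0)
    return out

table = np.arange(32, dtype=float).reshape(8, 4)   # 8 embedding vectors of width 4
idx = [1, 3, 6]
assert np.allclose(columnwise_lookup_reduce(table, idx, 2),
                   rowwise_lookup_reduce(table, idx, 2))
```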
{"title":"Flips: A Flexible Partitioning Strategy Near Memory Processing Architecture for Recommendation System","authors":"Yudi Qiu;Lingfei Lu;Shiyan Yi;Minge Jing;Xiaoyang Zeng;Yang Kong;Yibo Fan","doi":"10.1109/TPDS.2025.3539534","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3539534","url":null,"abstract":"Personalized recommendation systems are massively deployed in production data centers. The memory-intensive embedding layers of recommendation systems are the crucial performance bottleneck, with operations manifesting as sparse memory lookups and simple reduction computations. Recent studies propose near-memory processing (NMP) architectures to speed up embedding operations by utilizing high internal memory bandwidth. However, these solutions typically employ a fixed vector partitioning strategy that fail to adapt to changes in data center deployment scenarios and lack practicality. We propose Flips, a <underline>fl</u>ex<underline>i</u>ble <underline>p</u>artitioning <underline>s</u>trategy NMP architecture that accelerates embedding layers. Flips supports more than ten partitioning strategies through hardware-software co-design. Novel hardware architectures and address mapping schemes are designed for the memory-side and host-side. We provide two approaches to determine the optimal partitioning strategy for each embedding table, enabling the architecture to accommodate changes in deployment scenarios. Importantly, Flips is decoupled from the NMP level and can utilize rank-level, bank-group-level and bank-level parallelism. In peer-level NMP evaluations, Flips outperforms state-of-the-art NMP solutions, RecNMP, TRiM, and ReCross by up to 4.0×, 4.1×, and 3.5×, respectively.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"745-758"},"PeriodicalIF":5.6,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Loci: Federated Continual Learning of Heterogeneous Tasks at Edge
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-01-29 DOI: 10.1109/TPDS.2025.3531123
Yaxin Luopan;Rui Han;Qinglong Zhang;Xiaojiang Zuo;Chi Harold Liu;Guoren Wang;Lydia Y. Chen
Federated continual learning (FCL) has attracted growing attention in achieving collaborative model training among edge clients, each of which learns its local model for a sequence of tasks. Most existing FCL approaches aggregate clients’ latest local models to exchange knowledge. This unfortunately deviates from real-world scenarios where each model is optimized independently using the client’s own dynamic data and different clients have heterogeneous tasks. These tasks not only have distinct class labels (e.g., animals or vehicles) but also differ in input feature distributions. The aggregated model thus often shifts to a higher loss value and incurs accuracy degradation. In this article, we depart from the model-grained view of aggregation and transform it into multiple task-grained aggregations. Each aggregation allows a client to learn from other clients to improve its model accuracy on one task. To this end, we propose Loci to provide abstractions for clients’ past and peer task knowledge using compact model weights, and develop a communication-efficient approach to train each client’s local model by exchanging its tasks’ knowledge with the most accuracy-relevant one from other clients. Through its general-purpose API, Loci can be used to provide efficient on-device training for existing deep learning applications of graph, image, natural language processing, and multimodal data. Using extensive comparative evaluations, we show Loci improves the model accuracy by 32.48% without increasing training time, reduces communication cost by 83.6%, and achieves more improvements when scale (task/client number) increases.
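The task-grained selection step can be sketched simply: for a given task, a client compares its compact task weights with its peers' and blends in only the most relevant one. Cosine similarity and the fixed blend factor are illustrative assumptions, not Loci's accuracy-relevance measure.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate_task(local_weights, peer_weights, blend=0.3):
    """peer_weights: {peer_id: compact weight vector for the same task}."""
    best = max(peer_weights, key=lambda p: cosine(local_weights, peer_weights[p]))
    return (1 - blend) * local_weights + blend * peer_weights[best]

local = np.array([0.9, 0.1, 0.0])
peers = {"client-2": np.array([0.8, 0.2, 0.1]),
         "client-7": np.array([-0.5, 0.9, 0.3])}
print(aggregate_task(local, peers))  # blends with client-2, the more similar task
```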
{"title":"Loci: Federated Continual Learning of Heterogeneous Tasks at Edge","authors":"Yaxin Luopan;Rui Han;Qinglong Zhang;Xiaojiang Zuo;Chi Harold Liu;Guoren Wang;Lydia Y. Chen","doi":"10.1109/TPDS.2025.3531123","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3531123","url":null,"abstract":"Federated continual learning (FCL) has attracted growing attention in achieving collaborative model training among edge clients, each of which learns its local model for a sequence of tasks. Most existing FCL approaches aggregate clients’ latest local models to exchange knowledge. This unfortunately deviates from real-world scenarios where each model is optimized independently using the client’s own dynamic data and different clients have heterogeneous tasks. These tasks not only have distinct class labels (e.g., animals or vehicles) but also differ in input feature distributions. The aggregated model thus often shifts to a higher loss value and incurs accuracy degradation. In this article, we depart from the model-grained view of aggregation and transform it into multiple task-grained aggregations. Each aggregation allows a client to learn from other clients to improve its model accuracy on one task. To this end, we propose Loci to provide abstractions for clients’ past and peer task knowledge using compact model weights, and develop a communication-efficient approach to train each client’s local model by exchanging its tasks’ knowledge with the most accuracy relevant one from other clients. Through its general-purpose API, Loci can be used to provide efficient on-device training for existing deep learning applications of graph, image, nature language processing, and multimodal data. Using extensive comparative evaluations, we show Loci improves the model accuracy by 32.48% without increasing training time, reduces communication cost by 83.6%, and achieves more improvements when scale (task/client number) increases.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"775-790"},"PeriodicalIF":5.6,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FHE4DMM: A Low-Latency Distributed Matrix Multiplication With Fully Homomorphic Encryption
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-01-28 DOI: 10.1109/TPDS.2025.3534846
Yi Chen;Qiang-Sheng Hua;Zixiao Hong;Lin Zhu;Hai Jin
Fully Homomorphic Encryption (FHE) is a promising technology for secure, non-interactive outsourced computation. One notable method to increase the throughput of FHE-based outsourcing is batching, which typically involves large-scale matrix-matrix multiplications (MM). However, the substantial overhead inherent in existing FHE schemes poses a major challenge for processing these large-scale tasks, often resulting in insufficient memory or prolonged delays on a single machine, making it practically unviable. Utilizing multi-machine parallelism in cloud clusters for outsourced computation offers a natural solution to these obstacles. In this work, we propose FHE4DMM, a distributed algorithm that provides a unified view on encrypted matrices, accommodating various FHE schemes and any matrix dimensions, to accelerate large-scale encrypted MM. A key innovation is its reuse optimizations for parallelized homomorphic computations, which can offer valuable insights for broader FHE-based applications. We utilized FHE4DMM to conduct large-scale square (4096 × 4096) and rectangular (32768 × 32768, 32768 × 16) matrix multiplications on 256 machines, achieving computation times of 172.2 s and 76.1 s, respectively, while ensuring a 128-bit security level. For scalability, the experiments demonstrate that FHE4DMM achieves linear speedup for 2^i (i from 0 to 6) machines across various matrix dimension cases. In addition, within the range of matrix dimensions that the state-of-the-art (SOTA) distributed FHE-MM algorithm (Huang et al. 2023) can handle, FHE4DMM attains a maximum speedup of 16.62x. To assess its practical performance, FHE4DMM is applied in a basic multi-layer feedforward network. We used 64 machines to perform secure outsourced inference on MNIST and CIFAR-10 datasets with encrypted models and data. Compared to using the SOTA, our method achieved speedups of up to 3.54x and 4.22x respectively, with the MM module obtaining a 4.09x and 4.87x speedup.
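The distribution pattern itself (independent of encryption) can be sketched as a blocked matrix product in which each machine owns one output block. The homomorphic-encryption layer and FHE4DMM's ciphertext-reuse optimizations are omitted entirely; plaintext NumPy stands in for encrypted operands.

```python
import numpy as np

def distributed_block_matmul(A, B, row_parts, col_parts):
    """Split A into row blocks and B into column blocks; each (i, j) block of C is an
    independent unit of work for one machine; the host assembles the result."""
    row_blocks = np.array_split(A, row_parts, axis=0)
    col_blocks = np.array_split(B, col_parts, axis=1)
    return np.block([[rb @ cb for cb in col_blocks] for rb in row_blocks])

A, B = np.random.rand(8, 6), np.random.rand(6, 4)
assert np.allclose(distributed_block_matmul(A, B, row_parts=4, col_parts=2), A @ B)
```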
{"title":"FHE4DMM: A Low-Latency Distributed Matrix Multiplication With Fully Homomorphic Encryption","authors":"Yi Chen;Qiang-Sheng Hua;Zixiao Hong;Lin Zhu;Hai Jin","doi":"10.1109/TPDS.2025.3534846","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3534846","url":null,"abstract":"Fully Homomorphic Encryption (FHE) is a promising technology for secure, non-interactive outsourced computation. One notable method to increase the throughput of FHE-based outsourcing is batching, which typically involves large-scale matrix-matrix multiplications (MM). However, the substantial overhead inherent in existing FHE schemes poses a major challenge for processing these large-scale tasks, often resulting in insufficient memory or prolonged delays on a single machine, making it practically unviable. Utilizing multi-machine parallelism in cloud clusters for outsourced computation offers a natural solution to these obstacles. In this work, we propose FHE4DMM, a distributed algorithm that provides a unified view on encrypted matrices, accommodating various FHE schemes and any matrix dimensions, to accelerate large-scale encrypted MM. A key innovation is its reuse optimizations for parallelized homomorphic computations, which can offer valuable insights for broader FHE-based applications. We utilized FHE4DMM to conduct large-scale square (<inline-formula><tex-math>$4096times 4096$</tex-math></inline-formula>) and rectangular (<inline-formula><tex-math>$32768times 32768,32768times 16$</tex-math></inline-formula> ) matrix multiplications on 256 machines, achieving computation time of 172.2 s and 76.1 s, respectively, while ensuring a 128-bit security level. For scalability, the experiments demonstrate that FHE4DMM achieves linear speedup for <inline-formula><tex-math>$2^{i}$</tex-math></inline-formula> (<inline-formula><tex-math>$i$</tex-math></inline-formula> is from 0 to 6) machines across various matrix dimension cases. In addition, within the range of matrix dimensions that the state-of-the-art (SOTA) distributed FHE-MM algorithm (Huang et al. 2023) can handle, FHE4DMM attains a maximum speedup of 16.62x. To assess its practical performance, FHE4DMM is applied in a basic multi-layer feedforward network. We used 64 machines to perform secure outsourced inference on MNIST and CIFAR-10 datasets with encrypted models and data. Compared to using the SOTA, our method achieved speedups of up to 3.54x and 4.22x respectively, with the MM module obtaining a 4.09x and 4.87x speedup.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 4","pages":"645-658"},"PeriodicalIF":5.6,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10856418","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Collaborative Service Composition Approach Considering Providers’ Self-Interest and Minimal Service Sharing
IF 5.6 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-01-27 DOI: 10.1109/TPDS.2025.3534283
Xiao Wang;Hanchuan Xu;Jian Yang;Xiaofei Xu;Zhongjie Wang
Service composition dynamically integrates various services from multiple providers to meet complex user requirements. However, most existing methods assume centralized control over all services, which is often unrealistic because providers typically prefer to independently manage their own services, posing challenges to the application of traditional methods. Collaborative service composition offers a solution by enabling providers to work together to complete service composition. However, this approach also faces its own challenges. Driven by self-interest, providers may be reluctant to offer services needed by others, and due to business competition, they may wish to share as few services as possible (where sharing services means disclosing service information to other providers). To address these challenges, we propose a novel collaborative service composition approach that comprehensively considers each provider’s self-interest and achieves service composition with minimal service sharing. First, we introduce a “self-interest degree” model to capture providers’ self-interest. This behavior may lead to service refusal, so we design a service availability prediction method based on a reputation model to minimize rejections. Then, we propose a decentralized service composition method. It utilizes historical composition records to mine empirical rules between requirements and services, constructing a correlation matrix, and collaboratively trains a multi-label classification model with other providers under a distributed federated learning framework. Combining the matrix and model outputs, we design a service composition method and a node coordination protocol that completes service composition with minimal service sharing. Experimental results demonstrate the effectiveness of the proposed method in capturing providers’ self-interest and showcase its superior performance compared to existing methods.
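The availability-prediction idea can be pictured with a toy reputation estimate: a provider's willingness to share a given service is estimated from past accept/reject outcomes with Laplace smoothing, and requests go to the provider with the highest estimate. This is an illustrative stand-in, not the paper's reputation model.

```python
def availability(accepts, rejects):
    """Laplace-smoothed estimate of how likely a provider is to share the service."""
    return (accepts + 1) / (accepts + rejects + 2)

def pick_provider(history):
    """history: {provider: (accepts, rejects)} for the requested service type."""
    return max(history, key=lambda p: availability(*history[p]))

history = {"provider-A": (8, 2), "provider-B": (3, 0), "provider-C": (1, 5)}
print(pick_provider(history))  # provider-B: highest smoothed acceptance rate (0.80)
```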
{"title":"A Collaborative Service Composition Approach Considering Providers’ Self-Interest and Minimal Service Sharing","authors":"Xiao Wang;Hanchuan Xu;Jian Yang;Xiaofei Xu;Zhongjie Wang","doi":"10.1109/TPDS.2025.3534283","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3534283","url":null,"abstract":"Service composition dynamically integrates various services from multiple providers to meet complex user requirements. However, most existing methods assume centralized control over all services, which is often unrealistic because providers typically prefer to independently manage their own services, posing challenges to the application of traditional methods. Collaborative service composition offers a solution by enabling providers to work together to complete service composition. However, this approach also faces its own challenges. Driven by self-interest, providers may be reluctant to offer services needed by others, and due to business competition, they may wish to share as few services as possible (where sharing services means disclosing service information to other providers). To address these challenges, we propose a novel collaborative service composition approach that comprehensively considers each provider’s self-interest and achieves service composition with minimal service sharing. First, we introduce a “self-interest degree” model to capture providers’ self-interest. This behavior may lead to service refusal, so we design a service availability prediction method based on a reputation model to minimize rejections. Then, we propose a decentralized service composition method. It utilizes historical composition records to mine empirical rules between requirements and services, constructing a correlations matrix, and collaboratively trains a multi-label classification model with other providers under a distributed federated learning framework. Combining the matrix and model outputs, we design a service composition method and a node coordination protocol that completes service composition with minimal service sharing. Experimental results demonstrate the effectiveness of the proposed method in capturing providers’ self-interest and showcase its superior performance compared to existing methods.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 3","pages":"598-615"},"PeriodicalIF":5.6,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0