{"title":"Alleviating straggler impacts for data parallel deep learning with hybrid parameter update","authors":"Hongliang Li , Qi Tian , Dong Xu , Hairui Zhao , Zhewen Xu","doi":"10.1016/j.future.2025.107775","DOIUrl":null,"url":null,"abstract":"<div><div>Data parallelism in distributed clusters faces challenges due to costly global parameter updates and performance imbalances, leading to stragglers that negatively impact training speed and accuracy. This paper proposes Cooperate Grouping Parallel (CGP), a hybrid parameter update scheme to alleviate the problem. CGP supports dynamic grouping among parallel workers and utilizes both intra-group synchronous update and inter-group asynchronous update. CGP treats straggler as an opportunity for worker groups to cooperatively reduce the global parameter update cost. We give the theoretical upper bound of model accuracy deviation caused by inter-group asynchronous updates, which proves the convergence property of the proposed CGP. Extensive testbed experiments on different workloads shows that CGP achieves 1.94<span><math><mo>×</mo></math></span> speedup compared to the other methods on average in different scenarios, and CGP improves accuracy by 16.8% over the asynchronous methods.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"168 ","pages":"Article 107775"},"PeriodicalIF":6.2000,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25000706","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Data parallelism in distributed clusters faces challenges from costly global parameter updates and performance imbalances, which produce stragglers that degrade both training speed and accuracy. This paper proposes Cooperate Grouping Parallel (CGP), a hybrid parameter update scheme that alleviates this problem. CGP supports dynamic grouping among parallel workers and combines intra-group synchronous updates with inter-group asynchronous updates. CGP treats stragglers as an opportunity for worker groups to cooperatively reduce the global parameter update cost. We derive a theoretical upper bound on the model accuracy deviation caused by inter-group asynchronous updates, which establishes the convergence of the proposed CGP. Extensive testbed experiments on different workloads show that CGP achieves an average 1.94× speedup over competing methods across different scenarios and improves accuracy by 16.8% over asynchronous methods.
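The abstract describes CGP's hybrid scheme only at a high level: synchronous gradient averaging inside each worker group, combined with asynchronous pushes between groups. The Python sketch below is not the authors' implementation; it only illustrates that general pattern on a toy quadratic objective. The `ParameterServer` class, `toy_gradient` function, group sizes, and learning rate are all illustrative assumptions.

```python
# Illustrative sketch of a hybrid parameter update: workers inside a group
# average gradients synchronously, while groups push their aggregated
# updates to a shared parameter store asynchronously (no inter-group barrier).
# All names (ParameterServer, toy_gradient, group sizes) are hypothetical.
import threading
import numpy as np


class ParameterServer:
    """Holds the global model and applies group updates under a lock."""

    def __init__(self, dim, lr=0.1):
        self.params = np.zeros(dim)
        self.lr = lr
        self._lock = threading.Lock()

    def pull(self):
        with self._lock:
            return self.params.copy()            # may be stale w.r.t. other groups

    def push(self, group_grad):
        with self._lock:
            self.params -= self.lr * group_grad  # applied without waiting for other groups


def toy_gradient(params, rng):
    """Stand-in for a real backward pass: gradient of 0.5 * ||params - 1||^2 plus noise."""
    return (params - 1.0) + 0.01 * rng.standard_normal(params.shape)


def run_group(server, group_size, steps, seed):
    """One worker group: synchronous averaging inside, asynchronous push outside."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        local_params = server.pull()             # inter-group: no barrier here
        # Intra-group synchronous step: every member computes a gradient on the
        # same pulled parameters, and the group averages them before pushing.
        grads = [toy_gradient(local_params, rng) for _ in range(group_size)]
        server.push(np.mean(grads, axis=0))


if __name__ == "__main__":
    server = ParameterServer(dim=4)
    # Two groups of different sizes, mimicking fast and slow (straggler) partitions.
    groups = [threading.Thread(target=run_group, args=(server, gsize, 200, seed))
              for seed, gsize in enumerate([4, 2])]
    for t in groups:
        t.start()
    for t in groups:
        t.join()
    print("final parameters:", server.params)    # should approach the optimum at 1.0
```

The absence of any barrier between the two group threads is what makes the inter-group part asynchronous, while the gradient averaging inside `run_group` stands in for an intra-group all-reduce; CGP's actual grouping policy and convergence analysis are given in the paper itself.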
Journal Introduction:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.