No Worker Left (Too Far) Behind: Dynamic Hybrid Synchronization for In‐Network ML Aggregation
Diego Cardoso Nunes, Bruno Loureiro Coelho, Ricardo Parizotto, Alberto Egon Schaeffer‐Filho
International Journal of Network Management (published 2024-07-25). DOI: 10.1002/nem.2290
Abstract
Achieving high‐performance aggregation is essential to scaling data‐parallel distributed machine learning (ML) training. Recent research in in‐network computing has shown that offloading aggregation to the network data plane can accelerate the aggregation process compared to traditional server‐only approaches, reducing propagation delay and consequently speeding up distributed training. However, the existing literature on in‐network aggregation does not provide ways to deal with slower workers (called stragglers). The presence of stragglers can negatively impact distributed training, increasing the time it takes to complete. In this paper, we present Serene, an in‐network aggregation system capable of circumventing the effects of stragglers. Serene coordinates the ML workers to cooperate with a programmable switch using a hybrid synchronization approach, in which the synchronization strategy can be changed dynamically through a control‐plane API that translates high‐level code into switch rules. The Serene switch employs an efficient data structure for managing synchronization and a hot‐swapping mechanism to consistently change from one synchronization strategy to another. We implemented and evaluated a prototype using BMv2 and a proof‐of‐concept on a Tofino ASIC. We ran experiments with realistic ML workloads, including a neural network trained for image classification. Our results show that Serene can speed up training by up to 40% in emulation scenarios by drastically reducing the cumulative waiting time compared to a synchronous baseline.
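The abstract describes the mechanism only at a high level, so the following Python toy model is our own illustrative sketch, not Serene's implementation: the names BSPPolicy, KSyncPolicy, AggregationSlot, and swap_policy are hypothetical. It models a switch-side aggregation slot that releases the aggregate under a bulk-synchronous policy (wait for every worker) or a partial k-of-n policy (tolerating stragglers), with a hot swap permitted only at round boundaries so that in-flight synchronization state stays consistent.

```python
"""Toy model of hybrid synchronization for in-network aggregation.

Hypothetical sketch: class and method names are illustrative and are
not Serene's real API, which the abstract does not specify.
"""

from abc import ABC, abstractmethod


class SyncPolicy(ABC):
    """Decides when an aggregation slot may release its result."""

    @abstractmethod
    def ready(self, contributions: int, num_workers: int) -> bool:
        ...


class BSPPolicy(SyncPolicy):
    """Bulk-synchronous: wait for every worker (no straggler tolerance)."""

    def ready(self, contributions, num_workers):
        return contributions == num_workers


class KSyncPolicy(SyncPolicy):
    """Partial synchronization: release once k workers have contributed."""

    def __init__(self, k):
        self.k = k

    def ready(self, contributions, num_workers):
        return contributions >= min(self.k, num_workers)


class AggregationSlot:
    """Stands in for the switch-side state that sums gradient fragments."""

    def __init__(self, num_workers, policy):
        self.num_workers = num_workers
        self.policy = policy
        self.total = 0.0
        self.contributions = 0

    def swap_policy(self, policy):
        # Hot swap only at a round boundary, so no in-flight round
        # mixes two synchronization strategies.
        assert self.contributions == 0, "swap only between rounds"
        self.policy = policy

    def push(self, gradient):
        """Add one worker's gradient; return the aggregate when ready."""
        self.total += gradient
        self.contributions += 1
        if self.policy.ready(self.contributions, self.num_workers):
            result, self.total, self.contributions = self.total, 0.0, 0
            return result
        return None


if __name__ == "__main__":
    slot = AggregationSlot(num_workers=4, policy=BSPPolicy())
    # Round 1, BSP: nothing is released until all 4 workers arrive.
    for g in (0.1, 0.2, 0.3):
        assert slot.push(g) is None
    print("BSP aggregate:", slot.push(0.4))

    # A straggler appears, so the control plane swaps to 3-of-4 sync.
    slot.swap_policy(KSyncPolicy(k=3))
    for g in (0.1, 0.2):
        assert slot.push(g) is None
    print("3-sync aggregate:", slot.push(0.3))  # released without worker 4
```

Restricting the swap to round boundaries mirrors the consistency concern the abstract raises: changing strategies mid-round could release an aggregate computed under mixed rules.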
About the Journal
Modern computer networks and communication systems are increasing in size, scope, and heterogeneity. The promise of a single end-to-end technology has not been realized and likely never will be. The decreasing cost of bandwidth is extending the possible applications of computer networks and communication systems to entirely new domains. Problems in integrating heterogeneous wired and wireless technologies, ensuring security and quality of service, and reliably operating large-scale systems, including cloud computing, have all emerged as important topics. The one constant is the need for network management. Challenges in network management have never been greater than they are today. The International Journal of Network Management is the forum for researchers, developers, and practitioners in network management to present their work to an international audience. The journal is dedicated to the dissemination of information that will enable improved management, operation, and maintenance of computer networks and communication systems. The journal is peer reviewed and publishes original papers (both theoretical and experimental) by leading researchers, practitioners, and consultants from universities, research laboratories, and companies around the world. Issues with thematic or guest-edited special topics typically appear several times per year. Topic areas for the journal are largely defined by the taxonomy for network and service management developed by IFIP WG6.6, together with IEEE-CNOM, the IRTF-NMRG, and the Emanics Network of Excellence.