{"title":"Efficient and Structural Gradient Compression with Principal Component Analysis for Distributed Training","authors":"Jiaxin Tan, Chao Yao, Zehua Guo","doi":"10.1145/3600061.3603140","DOIUrl":null,"url":null,"abstract":"Distributed machine learning is a promising machine learning approach for academia and industry. It can generate a machine learning model for dispersed training data via iterative training in a distributed fashion. To speed up the training process of distributed machine learning, it is essential to reduce the communication load among training nodes. In this paper, we propose a layer-wise gradient compression scheme based on principal component analysis and error accumulation. The key of our solution is to consider the gradient characteristics and architecture of neural networks by taking advantage of the compression ability enabled by PCA and the feedback ability enabled by error accumulation. The preliminary results on image classification task show that our scheme achieves good performance and reduces 97% of the gradient transmission.","PeriodicalId":228934,"journal":{"name":"Proceedings of the 7th Asia-Pacific Workshop on Networking","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th Asia-Pacific Workshop on Networking","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3600061.3603140","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Distributed machine learning is a promising approach for both academia and industry. It generates a machine learning model from dispersed training data via iterative training in a distributed fashion. To speed up the training process of distributed machine learning, it is essential to reduce the communication load among training nodes. In this paper, we propose a layer-wise gradient compression scheme based on principal component analysis (PCA) and error accumulation. The key to our solution is to account for the gradient characteristics and the architecture of neural networks by exploiting the compression ability of PCA and the feedback ability of error accumulation. Preliminary results on an image classification task show that our scheme achieves good performance while reducing gradient transmission by 97%.
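The abstract describes the scheme only at a high level. The following is a minimal, hypothetical sketch (not the authors' implementation) of layer-wise gradient compression with error accumulation, assuming each layer's gradient is a 2-D matrix compressed by keeping its top-k components; PCA is approximated here by a truncated SVD without mean-centering, and all names (LayerwisePCACompressor, pca_compress_layer, k) are illustrative assumptions.

```python
import numpy as np

def pca_compress_layer(grad, k):
    """Compress a 2-D layer gradient by keeping its top-k singular components.

    Returns rank-k factors (U_k, S_k, Vt_k) whose product approximates grad.
    """
    U, S, Vt = np.linalg.svd(grad, full_matrices=False)
    return U[:, :k], S[:k], Vt[:k, :]

def pca_decompress_layer(U_k, S_k, Vt_k):
    """Reconstruct the approximate gradient from its rank-k factors."""
    return (U_k * S_k) @ Vt_k

class LayerwisePCACompressor:
    """Hypothetical per-layer compressor with error accumulation (error feedback)."""

    def __init__(self, k):
        self.k = k
        self.residuals = {}  # per-layer error left uncompressed in earlier rounds

    def compress(self, name, grad):
        # Fold the accumulated error back into the current gradient.
        corrected = grad + self.residuals.get(name, 0.0)
        factors = pca_compress_layer(corrected, self.k)
        # Store what the rank-k approximation failed to capture for next round.
        self.residuals[name] = corrected - pca_decompress_layer(*factors)
        return factors

# Example: a worker compresses one layer's gradient before transmission.
compressor = LayerwisePCACompressor(k=4)
g = np.random.randn(256, 128)            # stand-in for one layer's gradient
factors = compressor.compress("fc1", g)
g_hat = pca_decompress_layer(*factors)   # approximation reconstructed at the receiver
```

Transmitting the rank-k factors instead of the full gradient is what yields the communication savings; the error-accumulation buffer feeds the discarded components back into later rounds so the compression error does not systematically bias training.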