A survey on parallel training algorithms for deep neural networks
Dongsuk Yook, Hyowon Lee, and In-Chul Yoo
Journal of the Acoustical Society of Korea, vol. 39, no. 6, pp. 505-514, November 2020. DOI: 10.7776/ASK.2020.39.6.505
Abstract: Training Deep Neural Networks (DNNs) typically requires large amounts of data, so parallel training approaches are needed to make training practical. Stochastic Gradient Descent (SGD) is one of the most widely used algorithms for training DNNs. However, since SGD is an inherently sequential process, parallelizing it requires some form of approximation. In this paper, we review various efforts to parallelize the SGD algorithm and analyze their computational overhead, communication overhead, and the effects of the approximations.
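To make the parallelization idea concrete, the simplest scheme the survey's topic covers is synchronous data-parallel SGD: each worker computes a gradient on its own shard of the data, the gradients are averaged (an all-reduce), and every worker applies the same averaged update. The sketch below is illustrative only and not taken from the paper; the toy least-squares model and all names are assumptions.

```python
# A minimal sketch of synchronous data-parallel SGD (illustrative, not the
# paper's implementation). Each "worker" is simulated as a data shard; the
# per-worker gradients are averaged each step, mimicking an all-reduce.

def gradient(w, shard):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 over one shard."""
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def parallel_sgd(shards, w=0.0, lr=0.02, steps=100):
    """Synchronous data-parallel SGD: average per-worker gradients each step."""
    for _ in range(steps):
        grads = [gradient(w, s) for s in shards]  # one gradient per worker
        avg = sum(grads) / len(grads)             # all-reduce by averaging
        w -= lr * avg                             # identical update everywhere
    return w

# Toy data: y = 2*x, split round-robin across 4 workers.
data = [(x, 2.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
w = parallel_sgd(shards)  # w converges to roughly 2.0
```

Because every worker waits for the averaged gradient before updating, this variant is mathematically equivalent to large-batch sequential SGD; the asynchronous and approximate schemes the paper surveys trade that exactness for reduced synchronization cost.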