A Momentum-incorporated Fast Parallelized Stochastic Gradient Descent for Latent Factor Model in Shared Memory Systems
Hang Gou, Jinli Li, Wen Qin, Chunlin He, Yurong Zhong, Rui Che
2020 IEEE International Conference on Networking, Sensing and Control (ICNSC), 2020-10-30
DOI: 10.1109/ICNSC48988.2020.9238077
Abstract
The latent factor (LF) model is an effective method for extracting useful knowledge from the high-dimensional and sparse (HiDS) data generated by various industrial applications. Parallelized stochastic gradient descent (SGD) is widely used to build a parallelized LF model that can handle large-scale HiDS data, but it suffers from slow convergence and considerable time cost. To address this issue, this study incorporates the principle of momentum into parallelized SGD, dynamically adjusting the momentum decay coefficient and the learning rate, and proposes a momentum-incorporated fast parallelized SGD (MFSGD) method to discover latent patterns in large-scale HiDS data. Experiments on two datasets show that the proposed MFSGD method outperforms state-of-the-art parallel SGD methods in computational efficiency.
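To make the core idea concrete, the sketch below illustrates a momentum-incorporated SGD update for a latent factor model on a sparse rating matrix. It is a minimal single-threaded illustration under stated assumptions, not the paper's algorithm: MFSGD adjusts the momentum decay coefficient and learning rate dynamically and runs in parallel on a shared-memory system, whereas this sketch uses a serial loop and fixed hyperparameters (the names and values of beta, lr, lam, and mfsgd_sketch are all hypothetical).

```python
import numpy as np

def mfsgd_sketch(ratings, m, n, k=20, lr=0.01, beta=0.9, lam=0.05, epochs=50):
    """Illustrative momentum SGD for a latent factor model.

    ratings: list of (i, j, r) triples observed in a sparse m x n matrix.
    All hyperparameter names and values are assumptions, not from the paper.
    """
    rng = np.random.default_rng(0)
    P = rng.standard_normal((m, k)) * 0.1   # row (e.g., user) latent factors
    Q = rng.standard_normal((n, k)) * 0.1   # column (e.g., item) latent factors
    vP = np.zeros_like(P)                   # momentum buffers for P
    vQ = np.zeros_like(Q)                   # momentum buffers for Q
    for _ in range(epochs):
        for i, j, r in ratings:
            err = r - P[i] @ Q[j]           # prediction error on one observed entry
            gP = -err * Q[j] + lam * P[i]   # regularized gradient w.r.t. P[i]
            gQ = -err * P[i] + lam * Q[j]   # regularized gradient w.r.t. Q[j]
            vP[i] = beta * vP[i] - lr * gP  # momentum update: decay plus new step
            vQ[j] = beta * vQ[j] - lr * gQ
            P[i] += vP[i]
            Q[j] += vQ[j]
    return P, Q
```

As a usage example, mfsgd_sketch([(0, 0, 5.0), (1, 1, 3.0), (2, 2, 4.0)], m=3, n=3) returns factor matrices whose product approximates the observed entries. The momentum buffers vP and vQ accumulate past gradients, which is what accelerates convergence relative to plain SGD; the paper's contribution lies in tuning beta and lr on the fly and parallelizing this loop over shared memory.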