Authors: Wenzhuo Song, Bo Yang, Xuehua Zhao, Fei Li
DOI: 10.1109/ICNIDC.2016.7974542
Published in: 2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC), September 2016
A fast and scalable supervised topic model using stochastic variational inference and MapReduce
Text analysis is an important and widespread task in cloud computing, and topic models are a popular and effective technology for it. Among topic models, sLDA is a well-known supervised variant: it associates a response variable or category label with each document, so the model can uncover the latent structure of a text dataset while retaining predictive power for supervised tasks. However, sLDA must process every document at each iteration of training, so once a dataset grows beyond what a single node can handle, sLDA is no longer practical. In this paper we propose a novel model named Mr.sLDA, which extends sLDA with stochastic variational inference (SVI) and MapReduce: SVI reduces the computational burden of sLDA, and MapReduce parallelizes the algorithm. Mr.sLDA makes training more efficient, and the training method can easily be deployed on a large computer cluster or in a cloud computing environment. Empirical results show that our approach trains efficiently while achieving accuracy similar to sLDA.
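The paper itself gives the full derivation; as a rough illustration of the SVI idea the abstract refers to — updating a global variational parameter from noisy statistics computed on a minibatch of documents rather than the whole corpus — here is a minimal sketch. All names, the step-size schedule constants, and the shape of the statistics are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def svi_step(lam, minibatch_stats, D, batch_size, t,
             tau=1.0, kappa=0.7, eta=0.01):
    """One stochastic variational update of a global parameter.

    lam             : current global variational parameter (topics x vocab)
    minibatch_stats : expected sufficient statistics from one minibatch
                      (in MapReduce terms: what the mappers compute locally
                      and a reducer sums up)
    D               : total number of documents in the corpus
    batch_size      : number of documents in this minibatch
    t               : iteration counter, used for the decaying step size
    """
    # Robbins-Monro step size: decays so updates provably converge
    rho = (tau + t) ** (-kappa)
    # Rescale the minibatch statistics as if the whole corpus had been seen,
    # giving a noisy estimate of the full-batch coordinate update
    lam_hat = eta + (D / batch_size) * minibatch_stats
    # Interpolate between the old parameter and the noisy estimate
    return (1.0 - rho) * lam + rho * lam_hat
```

The key contrast with batch sLDA training is that each call touches only `batch_size` documents instead of all `D`, which is what makes the per-iteration cost independent of corpus size and lets the statistics-gathering step be distributed across mappers.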