{"title":"Word sense disambiguation using author topic model","authors":"Shougo Kaneishi, Takuya Tajima","doi":"10.1109/INDCOMP.2014.7011753","DOIUrl":null,"url":null,"abstract":"Purpose of this paper is what decrease situations of misleading in text, blog, tweet etc. We use Latent Dirichlet Allocation (LDA) for Word Sense Disambiguation (WSD). This paper experiments with a new approaches for WSD. The approach is WSD with author topic model. The availability of this approach is exerted on modeling of sentence on the Twitter. In this study, first flow is author estimate, and second flow is WSD. In the first flow, we use LDA topic modeling and dataset from novels in Japanese. We use collapsed Gibbs sampling as the estimated method for parameter of LDA. In the second flow, we use the dataset from the tweet on Twitter. By the two experiments, author topic model is found to be useful for WSD.","PeriodicalId":246465,"journal":{"name":"2014 IEEE International Symposium on Independent Computing (ISIC)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Symposium on Independent Computing (ISIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDCOMP.2014.7011753","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The purpose of this paper is to reduce situations in which text, blog posts, tweets, etc. are misread. We use Latent Dirichlet Allocation (LDA) for Word Sense Disambiguation (WSD). This paper experiments with a new approach to WSD: disambiguation with an author topic model. The usefulness of this approach is demonstrated by modeling sentences on Twitter. The study proceeds in two stages: author estimation, then WSD. In the first stage, we build an LDA topic model from a dataset of Japanese novels, using collapsed Gibbs sampling to estimate the LDA parameters. In the second stage, we use a dataset of tweets from Twitter. The two experiments show that the author topic model is useful for WSD.
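The abstract's first stage estimates LDA parameters with collapsed Gibbs sampling. As an illustration only (not the authors' implementation, and on a hypothetical toy corpus rather than their Japanese-novel dataset), a minimal collapsed Gibbs sampler for LDA can be sketched as:

```python
import random

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA on a toy corpus.

    docs: list of documents, each a list of integer word ids.
    Returns per-token topic assignments and the count matrices.
    """
    rng = random.Random(seed)
    vocab = max(w for doc in docs for w in doc) + 1
    # Count matrices: topic-word counts, doc-topic counts, per-topic totals.
    nkw = [[0] * vocab for _ in range(n_topics)]
    ndk = [[0] * n_topics for _ in range(len(docs))]
    nk = [0] * n_topics
    # Random initial topic assignment for every token.
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            nkw[k][w] += 1; ndk[d][k] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token's current assignment from the counts.
                nkw[k][w] -= 1; ndk[d][k] -= 1; nk[k] -= 1
                # Full conditional:
                # p(z=t) ∝ (ndk[d][t]+alpha) * (nkw[t][w]+beta) / (nk[t]+V*beta)
                weights = [
                    (ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + vocab * beta)
                    for t in range(n_topics)
                ]
                # Sample a new topic proportionally to the weights.
                r = rng.random() * sum(weights)
                for t, wgt in enumerate(weights):
                    r -= wgt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                nkw[k][w] += 1; ndk[d][k] += 1; nk[k] += 1
    return z, nkw, ndk
```

On a toy corpus where word ids 0-2 co-occur in some documents and 3-5 in others, a two-topic run tends to separate the two word groups; the estimated topic-word distributions are what a downstream WSD step would consult. Hyperparameters `alpha` and `beta` here are illustrative defaults, not values from the paper.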