Authors: Maoyuan Zhang, Fanli He, Shui-Chin Chen
DOI: 10.1109/ISCC-C.2013.94
Published in: 2013 International Conference on Information Science and Cloud Computing Companion, 2013-12-07
New Features Acquisition of Text with Cloud-LDA Model
This paper investigates how to improve information retrieval (IR) by changing the feature distribution of text. It introduces Cloud Model theory into the Latent Dirichlet Allocation (LDA) model and builds a new feature selection system. The LDA model is used to mine the latent topical structure of a corpus: each topic is associated with a multinomial distribution over semantically related words, although it is debatable whether the topics themselves are semantically related to one another. Starting from the word probability distributions that the LDA model produces for a text, the new system applies Cloud Model theory to automatically extract the feature set with the highest contribution degree in that text. Results show that this feature set contains fewer features yet achieves higher classification accuracy, clearly outperforming currently popular feature selection methods. When a query is matched against words with high contribution degree, the more such words a document contains, the more relevant it is to the query. Experiments on single-language IR (SLIR) using the NTCIR-5 (5th NII Test Collection for IR Systems) collections show that this method achieves a clear improvement over several other IR methods.