Title: Attention-based Joint Representation Learning Network for Short text Classification
Authors: Xinyue Liu, Yexuan Tang
DOI: 10.1145/3404555.3404578 (https://doi.org/10.1145/3404555.3404578)
Published in: Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence
Publication date: 2020-04-23
Citations: 0
Abstract
Deep neural networks have recently achieved success in learning distributed representations for text classification. However, due to the sparsity of information in user-generated comments, existing approaches still exploit the semantic information only partially when classifying the current sentence. In this paper, we propose a novel attention-based joint representation learning network (AJRLN). The proposed model provides two attention-based subnets that extract different attentive features from the sentence embedding. These features are then combined by a representation combination layer to obtain a joint representation of the whole sentence for classification. We conduct extensive experiments on the SST, TREC, and SUBJ datasets. The experimental results demonstrate that our model achieves performance comparable to or better than other state-of-the-art methods.
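The abstract does not specify the internals of the two attention subnets or the combination layer. Purely as an illustration of the overall shape of the architecture, here is a minimal NumPy sketch under two assumptions that are not stated in the paper: each subnet is a simple dot-product attention with its own learnable query vector, and the combination layer is plain concatenation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_subnet(H, w):
    """One attention-based subnet (assumed form, not from the paper).

    H: (seq_len, dim) matrix of word embeddings for one sentence.
    w: (dim,) learnable query vector; each subnet has its own.
    Returns a (dim,) attentive feature: a weighted sum of word vectors.
    """
    scores = softmax(H @ w)   # (seq_len,) attention weights, sum to 1
    return scores @ H         # (dim,) attentive sentence feature

rng = np.random.default_rng(0)
seq_len, dim = 5, 8
H = rng.standard_normal((seq_len, dim))        # toy sentence embedding
w1 = rng.standard_normal(dim)                  # query for subnet 1
w2 = rng.standard_normal(dim)                  # query for subnet 2

# Two subnets yield two different attentive views of the same sentence.
f1 = attention_subnet(H, w1)
f2 = attention_subnet(H, w2)

# Representation combination layer: here assumed to be concatenation.
joint = np.concatenate([f1, f2])               # (2 * dim,) joint representation
```

In a full model, `joint` would feed a classification head (e.g. a linear layer plus softmax over classes), and `w1`, `w2` would be trained end to end; those details are assumptions here, not the paper's specification.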