Short text classification using semantically enriched topic model

IF 1.8 | JCR Q3 (Computer Science, Information Systems) | CAS Region 4 (Management) | Journal of Information Science | Pub Date: 2024-03-21 | DOI: 10.1177/01655515241230793
Farid Uddin, Yibo Chen, Zuping Zhang, Xin Huang
{"title":"Short text classification using semantically enriched topic model","authors":"Farid Uddin, Yibo Chen, Zuping Zhang, Xin Huang","doi":"10.1177/01655515241230793","DOIUrl":null,"url":null,"abstract":"Modelling short text is challenging due to the small number of word co-occurrence and insufficient semantic information that affects downstream Natural Language Processing (NLP) tasks, for example, text classification. Gathering information from external sources is expensive and may increase noise. For efficient short text classification without depending on external knowledge sources, we propose Expressive Short text Classification (EStC). EStC consists of a novel document context-aware semantically enriched topic model called the Short text Topic Model (StTM) that captures words, topics and documents semantics in a joint learning framework. In StTM, the probability of predicting a context word involves the topic distribution of word embeddings and the document vector as the global context, which obtains by weighted averaging of word embeddings on the fly simultaneously with the topic distribution of words without requiring an additional inference method for the document embedding. EStC represents documents in an expressive (number of topics × number of word embedding features) embedding space and uses a linear support vector machine (SVM) classifier for their classification. Experimental results demonstrate that EStC outperforms many state-of-the-art language models in short text classification using several publicly available short text data sets.","PeriodicalId":54796,"journal":{"name":"Journal of Information Science","volume":"20 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/01655515241230793","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Modelling short text is challenging because the small number of word co-occurrences and the lack of semantic information affect downstream Natural Language Processing (NLP) tasks such as text classification. Gathering information from external sources is expensive and may add noise. For efficient short text classification that does not depend on external knowledge sources, we propose Expressive Short text Classification (EStC). EStC is built on a novel document context-aware, semantically enriched topic model, the Short text Topic Model (StTM), which captures word, topic and document semantics in a joint learning framework. In StTM, the probability of predicting a context word involves the topic distribution over word embeddings and the document vector as the global context; the document vector is obtained on the fly by weighted averaging of word embeddings, simultaneously with the topic distribution of words, so no additional inference method is required for the document embedding. EStC represents documents in an expressive (number of topics × number of word embedding features) embedding space and classifies them with a linear support vector machine (SVM). Experimental results on several publicly available short text data sets demonstrate that EStC outperforms many state-of-the-art language models in short text classification.
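To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of the representation and classification step the abstract describes: each word contributes its embedding to every topic in proportion to that word's topic probability, giving a (number of topics × embedding dimension) document matrix that is flattened and passed to a linear SVM. In StTM these topic probabilities and embeddings come from the joint learning procedure; here they are random placeholders, and all names (document_vector, word_topic_probs, etc.) are illustrative assumptions.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

num_topics, embed_dim, vocab_size = 20, 100, 5000
word_embeddings = rng.normal(size=(vocab_size, embed_dim))          # placeholder word vectors
word_topic_probs = rng.dirichlet(np.ones(num_topics), vocab_size)   # placeholder P(topic | word)

def document_vector(word_ids):
    # Topic-weighted average of word embeddings -> (num_topics, embed_dim) matrix,
    # flattened into the expressive (topics x embedding features) document vector.
    emb = word_embeddings[word_ids]                 # (n_words, embed_dim)
    top = word_topic_probs[word_ids]                # (n_words, num_topics)
    weights = top / (top.sum(axis=0, keepdims=True) + 1e-12)
    doc_matrix = weights.T @ emb                    # (num_topics, embed_dim)
    return doc_matrix.ravel()

# Toy corpus: each "document" is a list of word ids with a binary label.
docs = [rng.integers(0, vocab_size, size=rng.integers(5, 15)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

X = np.stack([document_vector(d) for d in docs])
clf = LinearSVC().fit(X, labels)                    # linear SVM classifier, as in EStC
print("train accuracy:", clf.score(X, labels))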
Source journal
Journal of Information Science (Engineering & Technology / Computer Science: Information Systems)
CiteScore: 6.80
Self-citation rate: 8.30%
Annual articles: 121
Review time: 4 months
About the journal: The Journal of Information Science is a peer-reviewed international journal of high repute covering topics of interest to all those researching and working in the sciences of information and knowledge management. The Editors welcome material on any aspect of information science theory, policy, application or practice that will advance thinking in the field.