Multimodal sentiment analysis for social media contents during public emergencies

Tao Fan, Hao Wang, Peng Wu, Chen Ling, Milad Taleby Ahvanooey
{"title":"Multimodal sentiment analysis for social media contents during public emergencies","authors":"Tao Fan, Hao Wang, Peng Wu, Chen Ling, Milad Taleby Ahvanooey","doi":"10.2478/jdis-2023-0012","DOIUrl":null,"url":null,"abstract":"Abstract Purpose Nowadays, public opinions during public emergencies involve not only textual contents but also contain images. However, the existing works mainly focus on textual contents and they do not provide a satisfactory accuracy of sentiment analysis, lacking the combination of multimodal contents. In this paper, we propose to combine texts and images generated in the social media to perform sentiment analysis. Design/methodology/approach We propose a Deep Multimodal Fusion Model (DMFM), which combines textual and visual sentiment analysis. We first train word2vec model on a large-scale public emergency corpus to obtain semantic-rich word vectors as the input of textual sentiment analysis. BiLSTM is employed to generate encoded textual embeddings. To fully excavate visual information from images, a modified pretrained VGG16-based sentiment analysis network is used with the best-performed fine-tuning strategy. A multimodal fusion method is implemented to fuse textual and visual embeddings completely, producing predicted labels. Findings We performed extensive experiments on Weibo and Twitter public emergency datasets, to evaluate the performance of our proposed model. Experimental results demonstrate that the DMFM provides higher accuracy compared with baseline models. The introduction of images can boost the performance of sentiment analysis during public emergencies. Research limitations In the future, we will test our model in a wider dataset. We will also consider a better way to learn the multimodal fusion information. Practical implications We build an efficient multimodal sentiment analysis model for the social media contents during public emergencies. Originality/value We consider the images posted by online users during public emergencies on social platforms. The proposed method can present a novel scope for sentiment analysis during public emergencies and provide the decision support for the government when formulating policies in public emergencies.","PeriodicalId":92237,"journal":{"name":"Journal of data and information science (Warsaw, Poland)","volume":"8 1","pages":"61 - 87"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of data and information science (Warsaw, Poland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2478/jdis-2023-0012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose: Public opinion expressed during public emergencies now involves not only textual content but also images. Existing work, however, focuses mainly on text, does not achieve satisfactory sentiment-analysis accuracy, and fails to combine multimodal content. In this paper, we propose combining the texts and images generated on social media to perform sentiment analysis.

Design/methodology/approach: We propose a Deep Multimodal Fusion Model (DMFM) that combines textual and visual sentiment analysis. We first train a word2vec model on a large-scale public emergency corpus to obtain semantically rich word vectors as the input to textual sentiment analysis, and employ a BiLSTM to generate encoded textual embeddings. To fully exploit the visual information in images, a modified pretrained VGG16-based sentiment analysis network is used with the best-performing fine-tuning strategy. A multimodal fusion method then fuses the textual and visual embeddings and produces the predicted labels.

Findings: We performed extensive experiments on Weibo and Twitter public emergency datasets to evaluate the proposed model. The results demonstrate that the DMFM achieves higher accuracy than the baseline models, and that introducing images boosts the performance of sentiment analysis during public emergencies.

Research limitations: In future work, we will test the model on broader datasets and explore better ways to learn the fused multimodal representation.

Practical implications: We build an efficient multimodal sentiment analysis model for social media content posted during public emergencies.

Originality/value: We take into account the images posted by online users on social platforms during public emergencies. The proposed method opens a novel scope for sentiment analysis in such settings and can support government decision-making when formulating emergency policies.
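The Design/methodology/approach section above outlines a complete pipeline: word2vec vectors feeding a BiLSTM text encoder, a fine-tuned VGG16 image encoder, and a fusion step that yields the sentiment label. What follows is a minimal PyTorch sketch of that pipeline. The concatenation-based fusion, the layer sizes, and the class name DMFMSketch are illustrative assumptions, since the abstract does not specify the exact architecture.

import torch
import torch.nn as nn
from torchvision import models

class DMFMSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=2):
        super().__init__()
        # Text branch: word2vec-style embeddings (pretrained weights would be
        # loaded here) encoded by a bidirectional LSTM.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Visual branch: pretrained VGG16 with its final classifier layer
        # replaced, mirroring the abstract's "modified pretrained VGG16".
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        vgg.classifier[-1] = nn.Linear(4096, 2 * hidden_dim)
        self.visual = vgg
        # Fusion by concatenation followed by a classifier (an assumption;
        # the abstract only says the embeddings are fused "completely").
        self.classifier = nn.Linear(4 * hidden_dim, num_classes)

    def forward(self, token_ids, images):
        text_out, _ = self.bilstm(self.embedding(token_ids))
        text_feat = text_out[:, -1, :]      # final-step BiLSTM state
        visual_feat = self.visual(images)   # VGG16 image embedding
        fused = torch.cat([text_feat, visual_feat], dim=-1)
        return self.classifier(fused)       # predicted sentiment logits

In practice, the word2vec vectors trained on the emergency corpus would be copied into the embedding layer (for example via nn.Embedding.from_pretrained) before training, and the VGG16 layers would be frozen or unfrozen according to whichever fine-tuning strategy performs best.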